Although not confirmed and probably rare, if the newly allocated memory
is not already zero, this could have caused problems.
Also reorder the initializations slightly so they match the order of the
struct definition.
Author: Wong, Yi Wen <yiwong@amazon.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>

This appears to have been an omission in the original commit
0d692a0dc9f. All related information_schema views already include
foreign tables.
Reported-by: Nicolas Thauvin <nicolas.thauvin@dalibo.com>

The initial implementation of autovacuum work-items used a dynamic
shared memory area (DSA). However, it's argued that dynamic shared
memory is not portable enough, so we cannot rely on it being supported
everywhere; at the same time, autovacuum work-items are now a critical
part of the server, so it's not acceptable that they don't work in the
cases where dynamic shared memory is disabled. Therefore, let's fall
back to a simpler implementation of work-items that just uses
autovacuum's main shared memory segment for storage.
Discussion: https://postgr.es/m/CA+TgmobQVbz4K_+RSmiM9HeRKpy3vS5xnbkL95gSEnWijzprKQ@mail.gmail.com
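
As a rough illustration of the simpler scheme described above (a sketch only; the constant, type, and field names below are stand-ins, not necessarily the committed ones), the work items become a small fixed-size array embedded in autovacuum's regular shared-memory state rather than a DSA-allocated structure:

    #include "postgres.h"
    #include "storage/block.h"

    /* Illustrative sketch: names and the array size are stand-ins. */
    #define NUM_WORKITEMS 256

    typedef struct AutoVacWorkItem
    {
        int         avw_type;           /* kind of work requested */
        bool        avw_used;           /* is this slot occupied? */
        Oid         avw_database;       /* target database */
        Oid         avw_relation;       /* target relation */
        BlockNumber avw_blockNumber;    /* target block */
    } AutoVacWorkItem;

    typedef struct AutoVacuumShmem
    {
        /* ... other autovacuum shared state ... */
        AutoVacWorkItem av_workItems[NUM_WORKITEMS];
    } AutoVacuumShmem;

A fixed-size array means a request can find every slot in use; presumably such a request is simply dropped or retried later, which is the price of not depending on dynamic shared memory.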

Since we currently only have one protocol, this doesn't make much of a
difference other than the error message.
Author: Yugo Nagata <nagata@sraoss.co.jp>

Avoid unnecessary locking calls when a subscription is disabled.
Author: Yugo Nagata <nagata@sraoss.co.jp>

The original code (since 00e6a16d01) was assuming that aborting the
transaction in the autovacuum launcher was sufficient to release all
resources, but in reality the launcher runs quite a lot of code outside
of any transaction. Re-introduce individual cleanup calls to make abort
more robust.
Reported-by: Robert Haas
Discussion: https://postgr.es/m/CA+TgmobQVbz4K_+RSmiM9HeRKpy3vS5xnbkL95gSEnWijzprKQ@mail.gmail.com

The API for WaitLatch and friends followed the Unix convention in which
waiting for a socket connection to complete is identical to waiting for
the socket to accept a write. While Windows provides a select(2)
emulation that agrees with that, the native WaitForMultipleObjects API
treats them as quite different --- and for some bizarre reason, it will
report a not-yet-connected socket as write-ready. libpq itself has so
far escaped dealing with this because it waits with select(), but in
libpqwalreceiver.c we want to wait using WaitLatchOrSocket. The semantics
mismatch resulted in replication connection failures on Windows, but only
for remote connections (apparently, localhost connections complete
immediately, or at least too fast for anyone to have noticed the problem
in single-machine testing).
To fix, introduce an additional WL_SOCKET_CONNECTED wait flag for
WaitLatchOrSocket, which is identical to WL_SOCKET_WRITEABLE on
non-Windows, but results in waiting for FD_CONNECT events on Windows.
Ideally, we would also distinguish the two conditions in the API for
PQconnectPoll(), but changing that API at this point seems infeasible.
Instead, cheat by checking for PQstatus() == CONNECTION_STARTED to
determine that we're still waiting for the connection to complete.
(This is a cheat mainly because CONNECTION_STARTED is documented as an
internal state rather than something callers should rely on. Perhaps
we ought to change the documentation ... but this patch doesn't.)
Per reports from Jobin Augustine and Igor Neyman. Back-patch to v10
where commit 1e8a85009 exposed this longstanding shortcoming.
Andres Freund, minor fix and some code review/beautification by me
Discussion: https://postgr.es/m/CAHBggj8g2T+ZDcACZ2FmzX9CTxkWjKBsHd6NkYB4i9Ojf6K1Fw@mail.gmail.com
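
For illustration only — an ordinary libpq client, not the libpqwalreceiver code itself — the sketch below shows the distinction the fix relies on: while PQstatus() reports CONNECTION_STARTED, a request to wait for the socket to become writable really means "wait for connect() to finish", which is the case that maps to WL_SOCKET_CONNECTED on Windows. With select(), as used here and inside libpq, the two cases happen to look identical, which is why libpq itself never hit the problem.

    #include <stdio.h>
    #include <sys/select.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /* the connection string is a placeholder */
        PGconn     *conn = PQconnectStart("host=localhost dbname=postgres");
        PostgresPollingStatusType st = PGRES_POLLING_WRITING;

        if (conn == NULL || PQstatus(conn) == CONNECTION_BAD)
            return 1;

        while (st != PGRES_POLLING_OK && st != PGRES_POLLING_FAILED)
        {
            int         sock = PQsocket(conn);
            fd_set      rfds, wfds;

            FD_ZERO(&rfds);
            FD_ZERO(&wfds);
            if (st == PGRES_POLLING_READING)
                FD_SET(sock, &rfds);
            else
                FD_SET(sock, &wfds);

            /*
             * If PQstatus(conn) == CONNECTION_STARTED at this point, the
             * "writable" wait is really a wait for the connection attempt
             * to complete; that is what WL_SOCKET_CONNECTED expresses for
             * WaitLatchOrSocket on Windows.
             */
            if (select(sock + 1, &rfds, &wfds, NULL, NULL) < 0)
                break;
            st = PQconnectPoll(conn);
        }

        if (PQstatus(conn) != CONNECTION_OK)
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 0;
    }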

Before commit d3cc37f1d801a6b5cad9bf179274a8, an inheritance parent
whose only children were temp tables of other sessions would end up
as a simple scan of the parent; but with that commit, we end up with
an Append node, per a report from Ashutosh Bapat. Tweak the logic
so that we go back to the old way, and update the function header
comment for partitioning while we're at it.
Ashutosh Bapat, reviewed by Amit Langote and adjusted by me.
Discussion: http://postgr.es/m/CAFjFpReWJr1yTkHU=OqiMBmcYCMoSW3VPR39RBuQ_ovwDFBT5Q@mail.gmail.com

Amit Kapila
Discussion: http://postgr.es/m/CAA4eK1KeQWZOoDmDmGMwuqzPW9JhRS+ditQVFdAfGjNmMZzqMQ@mail.gmail.com

Stress testing by Andreas Seltenreich disclosed longstanding problems that
occur if a FATAL exit (e.g. due to receipt of SIGTERM) occurs while we are
trying to execute a ROLLBACK of an already-failed transaction. In such a
case, xact.c is in TBLOCK_ABORT state, so that AbortOutOfAnyTransaction
would skip AbortTransaction and go straight to CleanupTransaction. This
led to an assert failure in an assert-enabled build (due to the ROLLBACK's
portal still having a cleanup hook) or without assertions, to a FATAL exit
complaining about "cannot drop active portal". The latter's not
disastrous, perhaps, but it's messy enough to want to improve it.
We don't really want to run all of AbortTransaction in this code path.
The minimum required to clean up the open portal safely is to do
AtAbort_Memory and AtAbort_Portals. It seems like a good idea to
do AtAbort_Memory unconditionally, to be entirely sure that we are
starting with a safe CurrentMemoryContext. That means that if the
main loop in AbortOutOfAnyTransaction does nothing, we need an extra
step at the bottom to restore CurrentMemoryContext = TopMemoryContext,
which I chose to do by invoking AtCleanup_Memory. This'll result in
calling AtCleanup_Memory twice in many of the paths through this function,
but that seems harmless and reasonably inexpensive.
The original motivation for the assertion in AtCleanup_Portals was that
we wanted to be sure that any user-defined code executed as a consequence
of the cleanup hook runs during AbortTransaction not CleanupTransaction.
That still seems like a valid concern, and now that we've seen one case
of the assertion firing --- which means that exactly that would have
happened in a production build --- let's replace the Assert with a runtime
check. If we see the cleanup hook still set, we'll emit a WARNING and
just drop the hook unexecuted.
This has been like this a long time, so back-patch to all supported
branches.
Discussion: https://postgr.es/m/877ey7bmun.fsf@ansel.ydns.eu

Author: Masahiko Sawada <sawada.mshk@gmail.com>

The sole useful effect of this function, to check that no catcache
entries have positive refcounts at transaction end, has really been
obsolete since we introduced ResourceOwners in PG 8.1. We reduced the
checks to assertions years ago, so that the function was a complete
no-op in production builds. There have been previous discussions about
removing it entirely, but consensus up to now was that it had some small
value as a cross-check for bugs in the ResourceOwner logic.
However, it now emerges that it's possible to trigger these assertions
if you hit an assert-enabled backend with SIGTERM during a call to
SearchCatCacheList, because that function temporarily increases the
refcounts of entries it's intending to add to a catcache list construct.
In a normal ERROR scenario, the extra refcounts are cleaned up by
SearchCatCacheList's PG_CATCH block; but in a FATAL exit we do a
transaction abort and exit without ever executing PG_CATCH handlers.
There's a case to be made that this is a generic hazard and we should
consider restructuring elog(FATAL) handling so that pending PG_CATCH
handlers do get run. That's pretty scary though: it could easily create
more problems than it solves. Preliminary stress testing by Andreas
Seltenreich suggests that there are not many live problems of this ilk,
so we rejected that idea.
There are more-localized ways to fix the problem; the most principled
one would be to use PG_ENSURE_ERROR_CLEANUP instead of plain PG_TRY.
But adding cycles to SearchCatCacheList isn't very appealing. We could
also weaken the assertions in AtEOXact_CatCache in some more or less
ad-hoc way, but that just makes its raison d'etre even less compelling.
In the end, the most reasonable solution seems to be to just remove
AtEOXact_CatCache altogether, on the grounds that it's not worth trying
to fix it. It hasn't found any bugs for us in many years.
Per report from Jeevan Chalke. Back-patch to all supported branches.
Discussion: https://postgr.es/m/CAM2+6=VEE30YtRQCZX7_sCFsEpoUkFBV1gZazL70fqLn8rcvBA@mail.gmail.com

Reported by Masahiko Sawada
Discussion: https://postgr.es/m/CAD21AoB+ycZ2z-4Ye=6MfQ_r0aV5r6cvVPw4kOyPdp6bHqQoBQ@mail.gmail.com

This affects mostly code comments, some documentation, and tests.
Official APIs already used "standby".

The previous message didn't mention the name of the table or the
bounds. Put the table name in the primary error message and the
bounds in the detail message.
Amit Langote, changed slightly by me. Suggestions on the exact
phrasing from Tom Lane, David G. Johnston, and Dean Rasheed.
Discussion: http://postgr.es/m/CA+Tgmoae6bpwVa-1BMaVcwvCCeOoJ5B9Q9-RHWo-1gJxfPBZ5Q@mail.gmail.com

We must advance the oldest XID that can be safely looked up in clog
*before* truncating CLOG, and the oldest XID that can't be reused
*after* truncating CLOG. This assertion, and the accompanying
comment, are confused; remove them.
Reported by Neha Sharma.
Discussion: http://postgr.es/m/CANiYTQumC3T=UMBMd1Hor=5XWZYuCEQBioL3ug0YtNQCMMT5wQ@mail.gmail.com

find_composite_type_dependencies correctly found columns that are of
the specified type, and columns that are of arrays of that type, but
not columns that are domains or ranges over the given type, its array
type, etc. The most general way to handle this seems to be to assume
that any type that is directly dependent on the specified type can be
treated as a container type, and processed recursively (allowing us
to handle nested cases such as ranges over domains over arrays ...).
Since a type's array type already has such a dependency, we can drop
the existing special case for the array type.
The very similar logic in get_rels_with_domain was likewise a few
bricks shy of a load, as it supposed that a directly dependent type
could *only* be a sub-domain. This is already wrong for ranges over
domains, and it'll someday be wrong for arrays over domains.
Add test cases illustrating the problems, and back-patch to all
supported branches.
Discussion: https://postgr.es/m/15268.1502309024@sss.pgh.pa.us

Commit 1efc7e538 did a poor job of emulating existing logic for touching
Datums that might be expanded-object pointers. It didn't check for typlen
being -1 first, which meant it could crash on fixed-length pass-by-ref
values, and probably on cstring values as well. It also didn't use
DatumGetPointer before VARATT_IS_EXTERNAL_EXPANDED, which while currently
harmless is contrary to the documentation and prevailing style.
I also think the lack of any explanation as to why datumSerialize makes
these particular nonobvious choices is pretty awful, so fix that.
Per report from Jarred Ward. Back-patch to 9.6 where this code came in.
Discussion: https://postgr.es/m/6F61E6D2-2F5E-4794-9479-A429BE1CEA4B@simple.com
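
A sketch of the corrected test (not the committed code verbatim): only a pass-by-reference varlena value, i.e. typlen == -1, can carry an expanded-object pointer, and the Datum must go through DatumGetPointer() before the macro is applied.

    #include "postgres.h"
    #include "utils/expandeddatum.h"

    /*
     * Sketch only: decide whether a datum might be an expanded-object
     * pointer.  The point of the fix is the order of the checks.
     */
    static bool
    datum_is_expanded_object(Datum value, bool isnull, bool typbyval, int16 typlen)
    {
        if (isnull || typbyval || typlen != -1)
            return false;       /* only varlena datums can be expanded objects */
        return VARATT_IS_EXTERNAL_EXPANDED(DatumGetPointer(value));
    }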

Similar to what was fixed in commit 9915de6c1cb2 for replication slots,
but this time it's related to replication origins: DROP SUBSCRIPTION
attempts to drop the replication origin, but that fails if the
replication worker process hasn't yet marked it unused. This causes
failures in the buildfarm:
ERROR: could not drop replication origin with OID 1, in use by PID 34069
Like the aforementioned commit, fix by having the process running DROP
SUBSCRIPTION sleep until the worker marks the replication origin
struct as free. This uses a condition variable on each replication
origin shmem state struct, so that the session trying to drop can sleep
and expect to be awakened by the process keeping the origin open.
Also fix an SGML markup issue in the previous commit.
Discussion: https://postgr.es/m/20170808001433.rozlseaf4m2wkw3n@alvherre.pgsql
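
In outline, the two sides follow the usual condition-variable pattern; in this sketch the struct, field, and wait-event names are placeholders rather than the actual replication-origin code, and locking of the shared state is elided.

    #include "postgres.h"
    #include "pgstat.h"
    #include "storage/condition_variable.h"

    typedef struct OriginShmemState
    {
        int                 acquired_by;    /* PID of the worker using it, or 0 */
        ConditionVariable   origin_cv;
    } OriginShmemState;

    /* DROP SUBSCRIPTION side: sleep until the worker lets go of the origin. */
    static void
    wait_for_origin_release(OriginShmemState *state)
    {
        ConditionVariablePrepareToSleep(&state->origin_cv);
        while (state->acquired_by != 0)
            ConditionVariableSleep(&state->origin_cv, PG_WAIT_EXTENSION);
        ConditionVariableCancelSleep();
    }

    /* Worker side: mark the origin free and wake any waiter. */
    static void
    release_origin(OriginShmemState *state)
    {
        state->acquired_by = 0;
        ConditionVariableBroadcast(&state->origin_cv);
    }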

In commit 9915de6c1cb2, we introduced a new wait point for replication
slots and incorrectly labelled it as wait event PG_WAIT_LOCK. That's
wrong, so invent an appropriate new wait event instead, and document it
properly.
While at it, fix numerous other problems in the vicinity:
- two different walreceiver wait events were being mixed up in a single
wait event (which wasn't documented either); split it out so that they
can be distinguished, and document the new events properly.
- ParallelBitmapPopulate was documented but didn't exist.
- ParallelBitmapScan was not documented (I think this should be called
"ParallelBitmapScanInit" instead).
- Logical replication wait events weren't documented.
- various symbols had been added in dartboard order in various places.
Put them in alphabetical order instead, as was originally intended.
Discussion: https://postgr.es/m/20170808181131.mu4fjepuh5m75cyq@alvherre.pgsql

Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 1a0b5e655d7871506c2b1c7ba562c2de6b6a55de

This would lead to failures if local and remote tables have a different
column order. The tests previously didn't catch that because they only
tested the initial data copy. So add another test that exercises the
apply worker.
Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>

The relation attribute map was not initialized for dropped columns,
leading to errors later on.
Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Reported-by: Scott Milliken <scott@deltaex.com>
Bug: #14769

lo_put() surely should require UPDATE permission, the same as lowrite(),
but it failed to check for that, as reported by Chapman Flack. Oversight
in commit c50b7c09d; backpatch to 9.4 where that was introduced.
Tom Lane and Michael Paquier
Security: CVE-2017-7548

Commit 3eefc51053f250837c3115c12f8119d16881a2d7 claimed to make
pg_user_mappings enforce the qualifications user_mapping_options had
been enforcing, but its removal of a longstanding restriction left them
distinct when the current user is the subject of a mapping yet has no
server privileges. user_mapping_options emits no rows for such a
mapping, but pg_user_mappings includes full umoptions. Change
pg_user_mappings to show null for umoptions. Back-patch to 9.2, like
the above commit.
Reviewed by Tom Lane. Reported by Jeff Janes.
Security: CVE-2017-7547

Some authentication methods allowed it, others did not. On the client side,
libpq does not even try to authenticate with an empty password, which makes
using empty passwords hazardous: an administrator might think that an
account with an empty password cannot be used to log in, because psql
doesn't allow it, and not realize that a different client would in fact
allow it. To clear that confusion and to be consistent, disallow empty
passwords in all authentication methods.
All the authentication methods that used plaintext authentication over the
wire, except for BSD authentication, already checked that the password
received from the user was not empty. To avoid forgetting it in the future
again, move the check to the recv_password_packet function. That only
forbids using an empty password with plaintext authentication, however.
MD5 and SCRAM need a different fix:
* In stable branches, check that the MD5 hash stored for the user does not
correspond to an empty string. This adds some overhead to MD5
authentication, because the server needs to compute an extra MD5 hash, but
it is not noticeable in practice.
* In HEAD, modify CREATE and ALTER ROLE to clear the password if an empty
string, or a password hash that corresponds to an empty string, is
specified. The user-visible behavior is the same as in the stable branches,
the user cannot log in, but it seems better to stop the empty password from
entering the system in the first place. Secondly, it is fairly expensive to
check that a SCRAM hash doesn't correspond to an empty string, because
computing a SCRAM hash is much more expensive than an MD5 hash by design,
so better avoid doing that on every authentication.
We could clear the password on CREATE/ALTER ROLE also in stable branches,
but we would still need to check at authentication time, because even if we
prevent empty passwords from being stored in pg_authid, there might be
existing ones there already.
Reported by Jeroen van der Ham, Ben de Graaff and Jelte Fennema.
Security: CVE-2017-7546
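
The core of the rule is small enough to show directly; this is an illustration of the policy only, not the backend function, which reports the failure through its normal error machinery rather than returning a flag.

    #include <stdbool.h>
    #include <stddef.h>

    /*
     * Illustration only: an empty password is never acceptable, no matter
     * which authentication method received it.
     */
    static bool
    password_is_acceptable(const char *passwd)
    {
        return passwd != NULL && passwd[0] != '\0';
    }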

Reported-by: Peter Geoghegan <pg@bowt.ie>

Reported-by: Yugo Nagata <nagata@sraoss.co.jp>

The NOTICE messages about tables being added or removed during
subscription refresh would be incorrect and possibly confusing if the
transaction rolls back, so silence them but keep them available for
debugging.
Discussion: https://www.postgresql.org/message-id/CAD21AoAvaXizc2h7aiNyK_i0FQSa-tmhpdOGwbhh7Jy544Ad4Q%40mail.gmail.com

The callers of GetOldestSafeDecodingTransactionId() all inverted the
value of the argument introduced in 2bef06d516460. Luckily this
appears to be inconsequential for the moment, as we wait for
concurrent in-progress transactions when assembling a
snapshot. Additionally this could only make a difference when adding a
second logical slot, because only a pre-existing slot could cause an
issue by lowering the returned xid dangerously much.
Reported-By: Antonin Houska
Discussion: https://postgr.es/m/32704.1496993134@localhost
Backport: 9.4-, where 2bef06d516460 was backpatched to.

Tidy-up for commit eccead9ed.

Previously, it had no effect. Now, if archive_mode=always, it will
work, and if not, you'll get a warning.
Masahiko Sawada, Michael Paquier, and Robert Haas. The patch as
submitted also changed the behavior so that we would write and remove
history files on standbys, but that seems like material for a separate
patch to me.
Discussion: http://postgr.es/m/CAD21AoC2Xw6M=ZJyejq_9d_iDkReC_=rpvQRw5QsyzKQdfYpkw@mail.gmail.com

Supporting ICU 4.2 seems useful because it ships with CentOS 6.
Versions before ICU 4.6 don't support pkg-config, so document an
installation method without using pkg-config.
In ICU 4.2, ucol_getKeywordsForLocale() sometimes returns values that
will not be accepted by uloc_toLanguageTag(). Skip loading keyword
variants in that version.
Reported-by: Victor Wagner <vitus@wagner.pp.ru>

If the table being attached had different attribute numbers than the
parent, the old code could incorrectly decide it needed to be scanned.
Amit Langote, reviewed by Ashutosh Bapat
Discussion: http://postgr.es/m/CA+TgmobexgbBr2+Utw-pOMw9uxaBRKRjMW_-mmzKKx9PejPLMg@mail.gmail.com

This allows a transaction abort to avoid killing those workers.
Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>

If it works, then we won't be storing two copies of all the tuples
that were just moved. If not, VACUUM will still take care of it
eventually. Per a report from AP and analysis from Amit Kapila, it
seems that a bulk load can cause splits fast enough that VACUUM won't
deal with the problem in time to prevent bloat.
Amit Kapila; I rewrote the comment.
Discussion: http://postgr.es/m/20170704105728.mwb72jebfmok2nm2@zip.com.au

If you do ALTER COLUMN SET NOT NULL against an inheritance parent table,
it will recurse to mark all the child columns as NOT NULL as well. This
is necessary for consistency: if the column is labeled NOT NULL then
reading it should never produce nulls.
However, that didn't happen in the case where ALTER ... ADD PRIMARY KEY
marks a target column NOT NULL that wasn't before. That was questionable
from the beginning, and now Tushar Ahuja points out that it can lead to
dump/restore failures in some cases. So let's make that case recurse too.
Although this is meant to fix a bug, it's enough of a behavioral change
that I'm pretty hesitant to back-patch, especially in view of the lack
of similar field complaints. It doesn't seem to be too late to put it
into v10 though.
Michael Paquier, editorialized on slightly by me
Discussion: https://postgr.es/m/b8794d6a-38f0-9d7c-ad4b-e85adf860fc9@enterprisedb.com

We don't actually support session tickets, since we do not create an SSL
session identifier. But it seems that OpenSSL will issue a session ticket
on-demand anyway, which will then fail when used. This results in
reconnection failures when using ticket-aware client-side SSL libraries
(such as the Npgsql .NET driver), as reported by Shay Rojansky.
To fix, just tell OpenSSL not to issue tickets. At some point in the
far future, we might consider enabling tickets instead. But the security
implications of that aren't entirely clear; and besides it would have
little benefit except for very short-lived database connections, which is
Something We're Bad At anyhow. It would take a lot of other work to get
to a point where that would really be an exciting thing to do.
While at it, also tell OpenSSL not to use a session cache. This doesn't
really do anything, since a backend would never populate the cache anyway,
but it might gain some micro-efficiencies and/or reduce security
exposures.
Patch by me, per discussion with Heikki Linnakangas and Shay Rojansky.
Back-patch to all supported versions.
Discussion: https://postgr.es/m/CADT4RqBU8N-csyZuzaook-c795dt22Zcwg1aHWB6tfVdAkodZA@mail.gmail.com
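
Concretely, both steps are stock OpenSSL calls against the server's SSL_CTX; a minimal sketch (the function name and context variable are illustrative):

    #include <openssl/ssl.h>

    /* Sketch: disable session caching and, where available, session tickets. */
    static void
    disable_ssl_session_reuse(SSL_CTX *ctx)
    {
        SSL_CTX_set_session_cache_mode(ctx, SSL_SESS_CACHE_OFF);
    #ifdef SSL_OP_NO_TICKET
        SSL_CTX_set_options(ctx, SSL_OP_NO_TICKET);
    #endif
    }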

ALTER USER ... SET did not support all the syntax variants of ALTER ROLE
... SET. Fix that, and to avoid further deviations of this kind, unify
many of the grammar rules for ROLE/USER/GROUP commands.
Reported-by: Pavel Golub <pavel@microolap.com>

Amit Langote
Discussion: http://postgr.es/m/CAFjFpReT_kq_uwU_B8aWDxR7jNGE=P0iELycdq5oupi=xSQTOw@mail.gmail.com

Amit Langote
Discussion: http://postgr.es/m/CAFjFpReT_kq_uwU_B8aWDxR7jNGE=P0iELycdq5oupi=xSQTOw@mail.gmail.com

For a table, the constraint can be considered validated immediately,
because the table must be empty. But for a foreign table this is
not necessarily the case.
Fixes a bug in commit f27a6b15e6566fba7748d0d9a3fc5bcfd52c4a1b.
Amit Langote, with some changes by me.
Discussion: http://postgr.es/m/d2b7419f-4a71-cf86-cc99-bfd0f359a1ea@lab.ntt.co.jp

Some of these comments wrongly implied that only an AFTER ROW trigger
will cause a 'wholerow' attribute to be present for a foreign table,
but a BEFORE ROW trigger can have the same effect. Others implied
that it would always be present for a foreign table, but that's not
true either.
Etsuro Fujita and Robert Haas
Discussion: http://postgr.es/m/10026bc7-1403-ef85-9e43-c6100c1cc0e3@lab.ntt.co.jp

Otherwise, partitioned tables with RETURNING expressions or subject
to a WITH CHECK OPTION do not work properly.
Amit Langote, reviewed by Amit Khandekar and Etsuro Fujita. A few
comment changes by me.
Discussion: http://postgr.es/m/9a39df80-871e-6212-0684-f93c83be4097@lab.ntt.co.jp

init_htab(), with #define HASH_DEBUG, prints a bunch of hashtable
parameters. It used to also print nentries, but commit 44ca4022f changed
that to "hash_get_num_entries(hctl)", which is wrong (the parameter should
be "hashp").
Rather than correct the coding, though, let's just remove that field from
the printout. The table must be empty, since we just finished building
it, so expensively calculating the number of entries is rather pointless.
Moreover hash_get_num_entries makes assumptions (about not needing locks)
which we could do without in debugging code.
Noted by Choi Doo-Won in bug #14764. Back-patch to 9.6 where the
faulty code was introduced.
Discussion: https://postgr.es/m/20170802032353.8424.12274@wrigleys.postgresql.org

This fixes a crash if the local table has a function index and the
function makes non-immutable calls.
Reported-by: Scott Milliken <scott@deltaex.com>
Author: Masahiko Sawada <sawada.mshk@gmail.com>