Reported-by: Ashwin Agrawal
Author: Ashwin Agrawal
Reviewed-by: Amit Kapila
Backpatch-through: 12, where it was introduced
Discussion: https://postgr.es/m/CALfoeisgdZhYDrJOukaBzvXfJOK2FQ0szVMK7dzmcy6w93iDUA@mail.gmail.com
 
The itemlen variable used to be referenced in multiple places, but since
reworking the serialization code it's used only in one assert. Fixed by
removing the variable and calling the macro from the assert directly.
Backpatch to 12, where this code was introduced.
Reported-by: Jeff Janes
Discussion: https://postgr.es/m/CAMkU=1zc_ovH9NZd_9ovuiEWkF9yX06URUDdXCmgDydf-bqB5A@mail.gmail.com
 
The serialization format of multivariate MCV lists included alignment in
order to allow direct access to part of the serialized data, but despite
multiple fixes (see for example commits d85e0f366a and ea4e1c0e8f) this
proved to be problematic.
This commit abandons alignment in the serialized format, and just copies
everything during deserialization. We now also track amount of memory
needed after deserialization (including alignment), which allows us to
deserialize the MCV list in a single pass.
Bump catversion, as this affects contents of pg_statistic_ext_data.
Backpatch to 12, where multi-column MCV lists were introduced.
Author: Tomas Vondra
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/2201.1561521148@sss.pgh.pa.us
 
The function pg_mcv_list_items() returns values stored in MCV items. The
items may contain columns with different data types, so the function was
generating a text array-like representation, but in an ad-hoc way, without
properly escaping various characters etc.
Fixed by simply building a text[] array, which also makes it easier to
use from queries etc.
Requires changes to pg_proc entry, so bump catversion.
Backpatch to 12, where multi-column MCV lists were introduced.
Author: Tomas Vondra
Reviewed-by: Dean Rasheed
Discussion: https://postgr.es/m/20190618205920.qtlzcu73whfpfqne@development
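For illustration, the items can now be inspected with a query along these
lines (adapted from the documentation example; the statistics object name
'stts' is a placeholder):

SELECT m.*
FROM pg_statistic_ext e
     JOIN pg_statistic_ext_data d ON d.stxoid = e.oid,
     pg_mcv_list_items(d.stxdmcv) m
WHERE e.stxname = 'stts';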
 
When building multi-column MCV lists, we compute base frequency for each
item, i.e. a product of per-column frequencies for values from the item.
As a value may be in multiple groups, the code was scanning the whole
array of groups while adding items to the MCV list. This works fine as
long as the number of distinct groups is small, but it's easy to trigger
O(N^2) behavior, especially after increasing the statistics target.
This commit precomputes frequencies for values in all columns, so that
when computing the base frequency it's enough to make a simple bsearch
lookup in the array.
Backpatch to 12, where multi-column MCV lists were introduced.
Discussion: https://postgr.es/m/20190618205920.qtlzcu73whfpfqne@development
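As a sketch of the scenario that made this noticeable (table, column and
statistics names are hypothetical), raising the statistics target makes the
number of distinct groups, and thus the cost of the old per-item scan, grow
quickly:

CREATE STATISTICS s_mcv (mcv) ON a, b FROM t;
ALTER TABLE t ALTER COLUMN a SET STATISTICS 10000;
ALTER TABLE t ALTER COLUMN b SET STATISTICS 10000;
ANALYZE t;  -- building the MCV list used to rescan the whole groups array per item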
 
d4c3a156c added code to remove columns that were not part of a table's
PRIMARY KEY constraint from the GROUP BY clause when all the primary key
columns were present in the GROUP BY. This is fine to do since we know
that there will only be one row per group coming from this relation.
However, the logic failed to consider inheritance parent relations. These
can have child relations without a primary key, but even if they did, they
could duplicate one of the parent's rows or one from another child
relation. In this case, those additional GROUP BY columns are required.
Fix this by disabling the optimization for inheritance parent tables.
In v11 and beyond, partitioned tables are fine since partitions cannot
overlap and before v11 partitioned tables could not have a primary key.
Reported-by: Manuel Rigger
Discussion: http://postgr.es/m/CA+u7OA7VLKf_vEr6kLF3MnWSA9LToJYncgpNX2tQ-oWzYCBQAw@mail.gmail.com
Backpatch-through: 9.6
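A minimal sketch of the affected case (hypothetical table names):

CREATE TABLE parent (id int PRIMARY KEY, payload text);
CREATE TABLE child () INHERITS (parent);  -- child rows are not covered by the parent's PK
-- The planner must keep "payload" in the GROUP BY here: rows coming from
-- "child" may duplicate an "id" value, so grouping by the PK alone is wrong.
SELECT id, payload, count(*) FROM parent GROUP BY id, payload;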
 
pgperltidy and reformat-dat-files too, though the latter didn't
find anything to change.
 
This reverts commits 4de60244e and b2d69806d. Further thought is
required to make this work properly.
 
4de60244e added the call to table_finish_bulk_insert to the
CopyMultiInsertBufferCleanup function. We use a CopyMultiInsertBuffer even
for non-partitioned tables, so having the cleanup do that meant we would
call table_finish_bulk_insert twice when performing COPY FROM with
a non-partitioned table.
Here we can just remove the direct call in CopyFrom and let
CopyMultiInsertBufferCleanup handle the call instead.
 
86b85044e abstracted calls to heap functions in COPY FROM to support a
generic table AM. However, when performing a copy into a partitioned
table, this commit neglected to call table_finish_bulk_insert for each
partition. Before 86b85044e, when we always called the heap functions,
there was no need to call heapam_finish_bulk_insert for partitions since
it only did any work when performing a copy without WAL. For partitioned
tables, this was unsupported anyway, so there was no issue. With pluggable
storage, we can't make any assumptions about what the table AM might want
to do in its equivalent function, so we'd better ensure we always call
table_finish_bulk_insert for each partition that's received a row.
For now, we make the table_finish_bulk_insert call whenever we evict a
CopyMultiInsertBuffer out of the CopyMultiInsertInfo. This does mean
that it's possible that we call table_finish_bulk_insert multiple times
per partition, which is not a problem other than being an inefficiency.
Improving this requires a more invasive patch, so let's leave that for
another day.
In passing, move the table_finish_bulk_insert for the target of the COPY
command so that it's only called when we're actually performing bulk
inserts. We don't need to call this when inserting 1 row at a time.
Reported-by: Robert Haas
Discussion: https://postgr.es/m/CA+TgmoYK=6BpxiJ0tN-p9wtH0BTAfbdxzHhwou0mdud4+BkYuQ@mail.gmail.com
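Roughly, the scenario looks like this (hypothetical names and file path);
each partition that receives rows via a CopyMultiInsertBuffer now gets its
own table_finish_bulk_insert call when the buffer is evicted:

CREATE TABLE t (a int, b text) PARTITION BY LIST (a);
CREATE TABLE t_1 PARTITION OF t FOR VALUES IN (1);
CREATE TABLE t_2 PARTITION OF t FOR VALUES IN (2);
COPY t FROM '/path/to/data.csv' WITH (FORMAT csv);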
 
Author: Alexander Lakhin
Discussion: https://postgr.es/m/af27d1b3-a128-9d62-46e0-88f424397f44@gmail.com
 
UBSan complains about this. Instead, cast to a suitable type requiring
only 4-byte alignment. DatumGetAnyArrayP() already assumes one can cast
between AnyArrayType and ArrayType, so this doesn't introduce a new
assumption. Back-patch to 9.5, where AnyArrayType was introduced.
Reviewed by Tom Lane.
Discussion: https://postgr.es/m/20190629210334.GA1244217@rfd.leadboat.com
 
The logic in reorder_grouping_sets to order grouping set elements to
match a pre-specified sort ordering was defective, resulting in
unnecessary sort nodes (though the query output would still be
correct). Repair, simplifying the code a little, and add a test.
Per report from Richard Guo, though I didn't use their patch. Original
bug seems to have been my fault.
Backpatch back to 9.5 where grouping sets were introduced.
Discussion: https://postgr.es/m/CAN_9JTzyjGcUjiBHxLsgqfk7PkdLGXiM=pwM+=ph2LsWw0WO1A@mail.gmail.com
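For example, a query of this shape (hypothetical table) is where the element
ordering matters; with the fix, the (a, b) grouping set can be sorted to match
the ORDER BY instead of adding an extra Sort node:

SELECT a, b, count(*)
FROM t
GROUP BY GROUPING SETS ((a, b), (a), ())
ORDER BY a, b;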
 
Using PG_RETURN_LSN() from non-fmgr pg_lsn_in_internal() happened to
work on some platforms, but should just be a plain "return".
 
Instead of calling pg_lsn_in() in check_recovery_target_lsn and
timestamptz_in() in check_recovery_target_time, reorganize the
respective code so that we don't raise any errors in the check hooks.
The previous code tried to use PG_TRY/PG_CATCH to handle errors in a
way that is not safe, so now the code contains no ereport() calls and
can operate safely within the GUC error handling system.
Moreover, since the interpretation of the recovery_target_time string
may depend on the time zone, we cannot do the final processing of that
string until all the GUC processing is done. Instead,
check_recovery_target_time() now does some parsing for syntax
checking, but the actual conversion to a timestamptz value is done
later in the recovery code that uses it.
Reported-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/20190611061115.njjwkagvxp4qujhp%40alap3.anarazel.de
 
The date/time values 'current', 'invalid', and 'undefined' were
removed a long time ago, but the code still contains explicit error
handling for the transition. To simplify the code and avoid having to
handle these values everywhere, just remove the recognition of these
tokens altogether now.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
 
In commit 18555b132 we tentatively established a rule that regression
tests should use names containing "regression" for databases, and names
starting with "regress_" for all other globally-visible object names, so
as to circumscribe the side-effects that "make installcheck" could have
on an existing installation.
This commit adds a simple enforcement mechanism for that rule: if the code
is compiled with ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS defined, it
will emit a warning (not an error) whenever a database, role, tablespace,
subscription, or replication origin name is created that doesn't obey the
rule. Running one or more buildfarm members with that symbol defined
should be enough to catch new violations, at least in the regular
regression tests. Most TAP tests wouldn't notice such warnings, but
that's actually fine because TAP tests don't execute against an existing
server anyway.
Since it's already the case that running src/test/modules/ tests in
installcheck mode is deprecated, we can use that as a home for tests
that seem unsafe to run against an existing server, such as tests that
might have side-effects on existing roles. Document that (though this
commit doesn't in itself make it any less safe than before).
Update regress.sgml to define these restrictions more clearly, and
to clean up assorted lack-of-up-to-date-ness in its descriptions of
the available regression tests.
Discussion: https://postgr.es/m/16638.1468620817@sss.pgh.pa.us
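Under the rule, globally-visible objects created by the tests look like the
first statement below (hypothetical names); with the symbol defined, the
second one would draw the new warning:

CREATE ROLE regress_test_user;  -- follows the naming rule
CREATE ROLE testrole;           -- WARNING under ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS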
 
Since we generate such names internally, it seems like a good idea
to have a policy of disallowing them for user use, as we do for many
other object types. Otherwise attempts to use them will randomly
fail due to collisions with internally-generated names.
Discussion: https://postgr.es/m/3606.1561747369@sss.pgh.pa.us
 
libpq/libpq-be.h is included by libpq/libpq.h so there is no need to
explicitly include it separately.
Author: Daniel Gustafsson
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/A4852E46-9ED1-4861-A23B-22A83E34A084@yesql.se
 
We forgot to map column numbers to/from the default partition for
various operations, leading to valid cases failing with spurious
errors, such as
ERROR: attribute N of type some_partition has been dropped
It was also possible that the search for conflicting rows in the default
partition when attaching another partition would fail to detect some.
Secondarily, it was also possible that such a search should be skipped
(because the constraint was implied) but wasn't.
Fix all this by mapping column numbers when necessary.
Reported-by: Daniel Wilches
Author: Amit Langote
Discussion: https://postgr.es/m/15873-8c61945d6b3ef87c@postgresql.org
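One hypothetical way the column numbers can get out of sync (the reported
case itself is not shown in this log): the parent carries a dropped column
that the default partition never had, so their attribute numbers differ:

CREATE TABLE p (a int, junk int) PARTITION BY LIST (a);
ALTER TABLE p DROP COLUMN junk;
CREATE TABLE p_def PARTITION OF p DEFAULT;             -- no dropped column here
CREATE TABLE p_1 (a int);
ALTER TABLE p ATTACH PARTITION p_1 FOR VALUES IN (1);  -- scans p_def for conflicting rows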
 
The stated reason for acquiring predicate locks on heap pages hasn't
existed since commit c01262a8, so fix the comment. Perhaps in a later
release we'll also be able to change the code to use tuple locks.
Back-patch all the way.
Reviewed-by: Ashwin Agrawal
Discussion: https://postgr.es/m/CAEepm%3D2GK3FVdnt5V3d%2Bh9njWipCv_fNL%3DwjxyUhzsF%3D0PcbNg%40mail.gmail.com
 
Commit 83e176ec1 moved row sampling functions from analyze.c to
utils/misc/sampling.c, but failed to update comment referring to
the sampling algorithm from Jeff Vitter's paper. Correct the
comment by pointing to utils/misc/sampling.c.
Author: Etsuro Fujita
Discussion: https://postgr.es/m/CAPmGK154gp%2BQd%3DcorQOv%2BPmbyVyZBjp_%2Bhb766UJeD1e_ie6XQ%40mail.gmail.com
 
Evidenced by failure under RELCACHE_FORCE_RELEASE (buildfarm member
prion).
Author: Amit Langote
Discussion: https://postgr.es/m/CA+HiwqGV=k_Eh4jBiQw66ivvdG+EUkrEYeHTYL1SvDj_YOYV0g@mail.gmail.com
 
Function was renamed/replaced in
c2fe139c201c48f1133e9fbea2dd99b8efe2fadd but the header comment was
not updated.
 
When a partitioned table contains foreign tables as partitions, it is
not possible to implement unique or primary key indexes -- but when
regular indexes are created, there is no reason to do anything other
than ignoring such partitions. We were raising errors upon encountering
the foreign partitions, which is unfriendly and doesn't protect against
any actual problems.
Relax this restriction so that index creation is allowed on partitioned
tables containing foreign partitions, becoming a no-op on them. (We may
later want to redefine this so that the FDW is told to create the
indexes on the foreign side.) This applies to CREATE INDEX, as well as
ALTER TABLE / ATTACH PARTITION and CREATE TABLE / PARTITION OF.
Backpatch to 11, where indexes on partitioned tables were introduced.
Discussion: https://postgr.es/m/15724-d5a58fa9472eef4f@postgresql.org
Author: Álvaro Herrera
Reviewed-by: Amit Langote
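For instance (hypothetical names, and assuming a postgres_fdw server called
loopback already exists), this now succeeds and simply skips the foreign
partition:

CREATE TABLE parted (a int, b text) PARTITION BY LIST (a);
CREATE TABLE parted_local PARTITION OF parted FOR VALUES IN (1);
CREATE FOREIGN TABLE parted_remote PARTITION OF parted
    FOR VALUES IN (2) SERVER loopback;
CREATE INDEX ON parted (b);  -- created on parted_local, a no-op on parted_remote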
 
a96c41f introduced the option for heap relations, but it still lacked the
variant to control the behavior for toast relations.
While at it, refactor the tests so that they stress more scenarios with
the various values that vacuum_index_cleanup can use. It would be
useful to couple those tests with pageinspect to check that pages are
actually cleaned up, but this is left for later.
Author: Masahiko Sawada, Michael Paquier
Reviewed-by: Peter Geoghegan
Discussion: https://postgr.es/m/CAD21AoCqs8iN04RX=i1KtLSaX5RrTEM04b7NHYps4+rqtpWNEg@mail.gmail.com
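A quick sketch of the user-visible knobs (hypothetical table name):

ALTER TABLE t SET (vacuum_index_cleanup = false);        -- heap-level option from a96c41f
ALTER TABLE t SET (toast.vacuum_index_cleanup = false);  -- the new toast-level variant
VACUUM (INDEX_CLEANUP FALSE) t;                          -- per-command override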
 
Move ATExecAlterColumnGenericOptions away from where it was unthinkingly
dropped, in the middle of a lot of ALTER COLUMN TYPE code. I don't have
any high principles about where to put it instead, so let's just put it
after ALTER COLUMN TYPE and before ALTER OWNER, matching existing
decisions about how to order related code stanzas.
Also add the minimal function header comment that the original author
was too cool to bother with.
Along the way, upgrade header comments for nearby ALTER COLUMN TYPE
functions.
Discussion: https://postgr.es/m/14787.1561403130@sss.pgh.pa.us
 
This patch reverts all the code changes of commit e76de8861, which turns
out to have been seriously misguided. We can't wait till later to compute
the definition string for an index; we must capture that before applying
the data type change for any column it depends on, else ruleutils.c will
deliver wrong/misleading results. (This fine point was documented
nowhere, of course.)
I'd also managed to forget that ATExecAlterColumnType executes once per
ALTER COLUMN TYPE clause, not once per statement; which resulted in the
code being basically completely broken for any case in which multiple ALTER
COLUMN TYPE clauses are applied to a table having non-constraint indexes
that must be rebuilt. Through very bad luck, none of the existing test
cases nor the ones added by e76de8861 caught that, but of course it was
soon found in the field.
The previous patch also had an implicit assumption that if a constraint's
index had a dependency on a table column, so would the constraint --- but
that isn't actually true, so it didn't fix such cases.
Instead of trying to delete unneeded index dependencies later, do the
is-there-a-constraint lookup immediately on seeing an index dependency,
and switch to remembering the constraint if so. In the unusual case of
multiple column dependencies for a constraint index, this will result in
duplicate constraint lookups, but that's not that horrible compared to all
the other work that happens here. Besides, such cases did not work at all
before, so it's hard to argue that they're performance-critical for anyone.
Per bug #15865 from Keith Fiske. As before, back-patch to all supported
branches.
Discussion: https://postgr.es/m/15865-17940eacc8f8b081@postgresql.org
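The case described above looks roughly like this (hypothetical table): more
than one ALTER COLUMN TYPE clause in a single statement, plus a non-constraint
index that has to be rebuilt:

CREATE TABLE t (a int, b int);
CREATE INDEX t_a_b_idx ON t (a, b);  -- plain index, not backing a constraint
ALTER TABLE t
    ALTER COLUMN a TYPE bigint,
    ALTER COLUMN b TYPE bigint;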
 
As part of REINDEX CONCURRENTLY, this formerly internal-only error
message becomes potentially user-visible (see regression tests), so
change from errmsg_internal() to errmsg(), and update comment.
 
This code is new in v12, and the defect probably was not user-visible.
 
The multivariate MCV estimation code may run user-defined operators on
the values in the MCV list, which means that those operators may
potentially leak the values from the MCV list. Guard against leaking
data to unprivileged users by checking that the user has SELECT
privileges on the table or all of the columns referred to by the
statistics.
Additionally, if there are any securityQuals on the RTE (either due to
RLS policies on the table, or accessing the table via a security
barrier view), not all rows may be visible to the current user, even
if they have table or column privileges. Thus we further insist that
the operator be leakproof in this case.
Dean Rasheed, reviewed by Tomas Vondra.
Discussion: https://postgr.es/m/CAEZATCUhT9rt7Ui=Vdx4N==VV5XOK5dsXfnGgVOz_JhAicB=ZA@mail.gmail.com
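For context, a sketch of the objects involved (hypothetical names); the MCV
list stores actual column values, and estimating the WHERE clause may apply
the query's operators to them:

CREATE STATISTICS s_ab (mcv) ON a, b FROM t;
ANALYZE t;
-- With this change, using the MCV list here requires SELECT on t (or on
-- columns a and b), and leakproof operators if securityQuals are present.
EXPLAIN SELECT * FROM t WHERE a = 1 AND b = 'x';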
 
Author: Vik Fearing
Discussion: https://postgr.es/m/150d3e9f-c7ec-3fb3-4fdb-def47c4144af%402ndquadrant.com
 
Commit 6753333f switched from a semaphore-based wait to a latch-based
wait for ProcSleep()/ProcWakeup(), but left behind some stray references
to semaphores.
Back-patch to 9.5.
Reviewed-by: Daniel Gustafsson, Michael Paquier
Discussion: https://postgr.es/m/CA+hUKGLs5H6zhmgTijZ1OaJvC1sG0=AFXc1aHuce32tKiQrdEA@mail.gmail.com
 
This makes the whole user experience more consistent when bumping into
failures, and more in line with the rewording done via 508300e.
Author: Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/20190514153252.GA22168@alvherre.pgsql
 
The SQL/JSON standard defines that the jsonpath 'like_regex' predicate should
support the same set of flags as XQuery/XPath. It appears that the
implementation of the 'q' flag was missed. This commit fixes that.
Discussion: https://postgr.es/m/CAPpHfdtyfPsxLYiTjp5Ov8T5xGsB5t3CwE5%2B3PS%3DLLwA%2BxTJog%40mail.gmail.com
Author: Nikita Glukhov, Alexander Korotkov
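With the 'q' flag the pattern is treated as a literal string rather than a
regular expression, as in this sketch:

SELECT jsonb_path_exists('"a.b"', '$ ? (@ like_regex "a.b" flag "q")');  -- true
SELECT jsonb_path_exists('"axb"', '$ ? (@ like_regex "a.b" flag "q")');  -- false, "." is literal
SELECT jsonb_path_exists('"axb"', '$ ? (@ like_regex "a.b")');           -- true, "." is a wildcard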
 
The list of combining characters to ignore for calculating the display
width of a string (used for example by psql) was wildly outdated and
incorrect.
Discussion: https://www.postgresql.org/message-id/flat/bbb19114-af1e-513b-08a9-61272794bd5c%402ndquadrant.com
 
Author: Daniel Gustafsson
 
This record uses one metadata buffer and registers some data associated
with the buffer, but when parsing the record for its description the
record data was accessed directly, even though there is none. This
usually leads to an incorrect description, but can also cause crashes,
for example in pg_waldump. Instead, fix things so that the parsing uses
the data associated with the metadata block.
This is an oversight from 3d92796, so backpatch down to 11.
Author: Michael Paquier
Discussion: https://postgr.es/m/20190617013059.GA3153@paquier.xyz
Backpatch-through: 11
 
This fixes an embarrassing oversight I (Andres) made in 737a292b,
namely missing two places where liverows/deadrows were used when
converting those variables to pointers, leading to incrementing the
pointer rather than the value.
It's actually not that easy to trigger a crash: one needs tuples
deleted by the current transaction, followed by a tuple deleted in
another session, all in one page. Which is presumably why this hasn't
been noticed before.
Reported-By: Steve Singer
Author: Steve Singer
Discussion: https://postgr.es/m/c7988239-d42c-ddc4-41db-171b23b35e4f@ssinger.info
 
This puts back reverted commit de87a084c0a5, with some bug fixes.
When two (or more) transactions are waiting for transaction T1 to release a
tuple-level lock, and transaction T1 upgrades its lock to a higher level, a
spurious deadlock can be reported among the waiting transactions when T1
finishes. The simplest example case seems to be:
T1: select id from job where name = 'a' for key share;
Y: select id from job where name = 'a' for update; -- starts waiting for T1
Z: select id from job where name = 'a' for key share;
T1: update job set name = 'b' where id = 1;
Z: update job set name = 'c' where id = 1; -- starts waiting for T1
T1: rollback;
At this point, transaction Y is rolled back on account of a deadlock: Y
holds the heavyweight tuple lock and is waiting for the Xmax to be released,
while Z holds part of the multixact and tries to acquire the heavyweight
lock (per protocol) and goes to sleep; once T1 releases its part of the
multixact, Z is awakened only to be put back to sleep on the heavyweight
lock that Y is holding while sleeping. Kaboom.
This can be avoided by having Z skip the heavyweight lock acquisition. As
far as I can see, the biggest downside is that if there are multiple Z
transactions, the order in which they resume after T1 finishes is not
guaranteed.
Backpatch to 9.6. The patch applies cleanly on 9.5, but the new tests don't
work there (because isolationtester is not smart enough), so I'm not going
to risk it.
Author: Oleksii Kliukin
Discussion: https://postgr.es/m/B9C9D7CD-EB94-4635-91B6-E558ACEC0EC3@hintbits.com
Discussion: https://postgr.es/m/2815.1560521451@sss.pgh.pa.us
 
WHERE EXISTS (...) queries cannot be executed by Parallel Hash Join
with jointype JOIN_UNIQUE_INNER, because there is no way to make a
partial plan totally unique. The consequence of allowing such plans
was duplicate results from some EXISTS queries.
Back-patch to 11. Bug #15857.
Author: Thomas Munro
Reviewed-by: Tom Lane
Reported-by: Vladimir Kriukov
Discussion: https://postgr.es/m/15857-d1ba2a64bce0795e%40postgresql.org
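The affected shape is roughly this (hypothetical tables): an EXISTS that the
planner may implement by unique-ifying the inner side of the join:

SELECT *
FROM orders o
WHERE EXISTS (SELECT 1 FROM order_lines l WHERE l.order_id = o.id);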
 
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 1a710c413ce4c4cd081843e563cde256bb95f490
 
Any authenticated user can overflow a stack-based buffer by changing the
user's own password to a purpose-crafted value. This often suffices to
execute arbitrary code as the PostgreSQL operating system account.
This fix was contributed by multiple folks, based on an initial analysis
from Tom Lane. This issue was introduced by 68e61ee, so it was possible
to make use of it at authentication time. It became easier to trigger
after ccae190, which made the SCRAM parsing stricter when changing a
password, in the case where the client passes down a verifier already
hashed using SCRAM. Back-patch to v10, where SCRAM was introduced.
Reported-by: Alexander Lakhin
Author: Jonathan Katz, Heikki Linnakangas, Michael Paquier
Security: CVE-2019-10164
Backpatch-through: 10
 
Author: Alexander Lakhin
Discussion: https://postgr.es/m/0a5419ea-1452-a4e6-72ff-545b1a5a8076@gmail.com
 
This reverts commits 3da73d6839dc and de87a084c0a5.
This code has some tricky corner cases that I'm not sure are correct and
not properly tested anyway, so I'm reverting the whole thing for next
week's releases (reintroducing the deadlock bug that we set to fix).
I'll try again afterwards.
Discussion: https://postgr.es/m/E1hbXKQ-0003g1-0C@gemulon.postgresql.org
 
We don't need to restrict column privileges on pg_statistic_ext;
all of that data is OK to read publicly. What we *do* need to do,
which was overlooked by 6cbfb784c, is revoke public read access on
pg_statistic_ext_data; otherwise we still have the same security
hole we started with.
Catversion bump to ensure that installations calling themselves
beta2 will have this fix.
Diagnosis/correction by Dean Rasheed and Tomas Vondra, but I'm
going to go ahead and push this fix ASAP so we get more buildfarm
cycles on it.
Discussion: https://postgr.es/m/8833.1560647898@sss.pgh.pa.us
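In plain SQL terms, the resulting privileges correspond to something like the
following (shown only as an illustration; presumably done the same way as the
existing restriction on pg_statistic):

REVOKE ALL ON pg_statistic_ext_data FROM PUBLIC;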
 
The GRANT in system_views allowed SELECT privileges on various columns in
the pg_statistic_ext catalog, but tableoid was not included in the list.
That made pg_dump fail because it's accessing this column when building
the list of extended statistics to dump.
Discussion: https://postgr.es/m/8833.1560647898%40sss.pgh.pa.us
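pg_dump's failing access was of this general shape (simplified; the real
query selects more columns), which needs column privilege on tableoid for an
unprivileged user:

SELECT tableoid, oid, stxname, stxnamespace, stxrelid
FROM pg_statistic_ext;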
 
Regular per-column statistics are stored in the pg_statistic catalog, which
is however rather difficult to read, so we also have the pg_stats view with
a human-readable version of the data.
For extended statistics the catalog was fairly easy to read, so we did
not have such a human-readable view so far. Commit 9b6babfa2d however did
split the catalog into two, which makes querying harder. Furthermore,
we want to show the multi-column MCV list in a way similar to per-column
stats (and not as a bytea value).
This commit introduces pg_stats_ext view, joining the two catalogs and
massaging the data to produce human-readable output similar to pg_stats.
It also considers RLS and access privileges - the data is shown only when
the user has access to all columns the extended statistic is defined on.
Bumped CATVERSION due to adding new system view.
Author: Dean Rasheed, with improvements by me
Reviewed-by: Dean Rasheed, John Naylor
Discussion: https://postgr.es/m/CAEZATCUhT9rt7Ui%3DVdx4N%3D%3DVV5XOK5dsXfnGgVOz_JhAicB%3DZA%40mail.gmail.com
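Hypothetical usage, with column names as in the accompanying documentation:

SELECT statistics_name, attnames, kinds, most_common_vals, most_common_freqs
FROM pg_stats_ext
WHERE tablename = 'some_table';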
 
Since extended statistics were introduced in PostgreSQL 10, there has been a
single catalog, pg_statistic_ext, storing both the definitions and the built
statistics. That's however problematic when a user is supposed to have
access only to the definitions, but not to the user data.
Consider for example pg_dump on a database with RLS enabled - if the
pg_statistic_ext catalog respects RLS (which it should, if it contains
user data), pg_dump would not see any records and the result would not
define any extended statistics. That would be a surprising behavior.
Until now this was not a pressing issue, because the existing types of
extended statistics (functional dependencies and ndistinct coefficients)
do not include any user data directly. This changed with introduction
of MCV lists, which do include most common combinations of values.
The easiest way to fix this is to split the pg_statistic_ext catalog
into two - one for the definitions, one for the built statistics.
The new catalog is called pg_statistic_ext_data, and we're maintaining
a 1:1 relationship with the old catalog - either there are matching
records in both catalogs, or neither of them.
Bumped CATVERSION due to changing system catalog definitions.
Author: Dean Rasheed, with improvements by me
Reviewed-by: Dean Rasheed, John Naylor
Discussion: https://postgr.es/m/CAEZATCUhT9rt7Ui%3DVdx4N%3D%3DVV5XOK5dsXfnGgVOz_JhAicB%3DZA%40mail.gmail.com
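After the split, the two catalogs are joined on the statistics object's OID,
for example:

SELECT e.stxname, e.stxkeys, d.stxdmcv IS NOT NULL AS has_mcv
FROM pg_statistic_ext e
     JOIN pg_statistic_ext_data d ON d.stxoid = e.oid;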