path: root/src
* Ensure static libraries have correct mod time even if ranlib messes it up. (Tom Lane, 2018-11-29)

  In at least Apple's version of ranlib, the output file is updated to have a mod time equal to the max of the timestamps of its components, and that data only has seconds precision. On a filesystem with sub-second file timestamp precision --- say, APFS --- this can result in the finished static library appearing older than its input files, which causes useless rebuilds and possible outright failures in parallel makes.

  We've only seen this reported in the field from people using Apple's ranlib with a non-Apple make, because Apple's make doesn't know about sub-second timestamps either so it doesn't decide rebuilds are needed. But Apple's ranlib presumably shares code with at least some BSDen, so it's not that unlikely that the same problem could arise elsewhere.

  To fix, just "touch" the output file after ranlib finishes.

  We seem to need this in only one place. There are other calls of ranlib in our makefiles, but they are working on intermediate files whose timestamps are not actually important, or else on an installed static library for which sub-second timestamp precision is unlikely to matter either. (Also, so far as I can tell, Apple's ranlib doesn't mess up the file timestamp in the latter usage anyhow.)

  In passing, change "ranlib" to "$(RANLIB)" in one place that was bypassing the make macro for no good reason.

  Per bug #15525 from Jack Kelly (via Alyssa Ross). Back-patch to all supported branches.

  Discussion: https://postgr.es/m/15525-a30da084f17a1faa@postgresql.org

* Fix minor typo in dsa.c. (Thomas Munro, 2018-11-29)

  Author: Takeshi Ideriha
  Discussion: https://postgr.es/m/4E72940DA2BF16479384A86D54D0988A6F3BF22D%40G01JPEXMBKW04

* Fix handling of synchronous replication for stopping WAL senders (Michael Paquier, 2018-11-29)

  This fixes an oversight from c6c3334, which forgot that if a subset of WAL senders are stopping and in a sync state, other WAL senders could still be waiting for a WAL position to be synced while committing a transaction. The stopping senders would not release those waiters, potentially breaking synchronous replication guarantees. This commit makes sure that WAL senders that are stopping are still able to release waiters and are tracked properly.

  On 9.4, this can also trigger an assertion failure when, for example, max_wal_senders is set to 1 and a WAL sender cannot find itself in a synchronous state when the instance stops.

  Reported-by: Paul Guo
  Author: Paul Guo, Michael Paquier
  Discussion: https://postgr.es/m/CAEET0ZEv8VFqT3C-cQm6byOB4r4VYWcef1J21dOX-gcVhCSpmA@mail.gmail.com
  Backpatch-through: 9.4

* Have BufFileSize() ereport() on FileSize() failure. (Peter Geoghegan, 2018-11-28)

  Move the responsibility for checking for and reporting a failure from the only current BufFileSize() caller, logtape.c, to BufFileSize() itself. Code within buffile.c is generally responsible for interfacing with fd.c to report irrecoverable failures. This seems like a convention that's worth sticking to.

  Reorganizing things this way makes it easy to make the error message raised in the event of BufFileSize() failure descriptive of the underlying problem. We're now clear on the distinction between temporary file name and BufFile name, and can show errno, confident that its value actually relates to the error being reported.

  In passing, an existing, similar buffile.c ereport() + errcode_for_file_access() site is changed to follow the same conventions.

  The API of the function BufFileSize() is changed by this commit, despite already being in a stable release (Postgres 11). This seems acceptable, since the BufFileSize() ABI was changed by commit aa551830421, which hasn't made it into a point release yet. Besides, it's difficult to imagine a third party BufFileSize() caller not just raising an error anyway, since BufFile state should be considered corrupt when BufFileSize() fails.

  Per complaint from Tom Lane.

  Discussion: https://postgr.es/m/26974.1540826748@sss.pgh.pa.us
  Backpatch: 11-, where shared BufFiles were introduced.

* Don't set PAM_RHOST for Unix sockets. (Thomas Munro, 2018-11-28)

  Since commit 2f1d2b7a we have set PAM_RHOST to "[local]" for Unix sockets. This caused Linux PAM's libaudit integration to make DNS requests for that name. It's not exactly clear what value PAM_RHOST should have in that case, but it seems clear that we shouldn't set it to an unresolvable name, so don't do that.

  Back-patch to 9.6. Bug #15520.

  Author: Thomas Munro
  Reviewed-by: Peter Eisentraut
  Reported-by: Albert Schabhuetl
  Discussion: https://postgr.es/m/15520-4c266f986998e1c5%40postgresql.org

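  A minimal sketch of the idea, assuming a libpam environment; the helper name and parameters are illustrative, not PostgreSQL's actual auth.c code:

      #include <sys/socket.h>
      #include <security/pam_appl.h>

      /*
       * Hedged sketch: set PAM_RHOST only for network connections.  For Unix
       * sockets, leave PAM_RHOST unset so libaudit has no bogus name such as
       * "[local]" to resolve.
       */
      static int
      maybe_set_pam_rhost(pam_handle_t *pamh, sa_family_t family,
                          const char *remote_host)
      {
          if (family == AF_UNIX)
              return PAM_SUCCESS;     /* local connection: no remote host */

          return pam_set_item(pamh, PAM_RHOST, remote_host);
      }
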
* Do not decode TOAST data for table rewrites (Tomas Vondra, 2018-11-28)

  During table rewrites (VACUUM FULL and CLUSTER), the main heap is logged using XLOG / FPI records, and thus (correctly) ignored in decoding. But the associated TOAST table is WAL-logged as plain INSERT records, and so was logically decoded and passed to reorder buffer.

  That has severe consequences with TOAST tables of non-trivial size. Firstly, reorder buffer has to keep all those changes, possibly spilling them to a file, incurring I/O costs and disk space. Secondly, ReorderBufferCommit() was stashing all those TOAST chunks into a hash table, which got discarded only after processing the row from the main heap. But as the main heap is not decoded for rewrites, this never happened, so all the TOAST data accumulated in memory, resulting either in excessive memory consumption or OOM.

  The fix is simple, as commit e9edc1ba already introduced infrastructure (namely HEAP_INSERT_NO_LOGICAL flag) to skip logical decoding of TOAST tables, but it only applied it to system tables. So simply use it for all TOAST data in raw_heap_insert().

  That would however solve only the memory consumption issue - the TOAST changes would still be decoded and added to the reorder buffer, and spilled to disk (although without TOAST tuple data, so much smaller). But we can solve that by tweaking DecodeInsert() to just ignore such INSERT records altogether, using XLH_INSERT_CONTAINS_NEW_TUPLE flag, instead of skipping them later in ReorderBufferCommit().

  Review: Masahiko Sawada
  Discussion: https://www.postgresql.org/message-id/flat/1a17c643-e9af-3dba-486b-fbe31bc1823a%402ndquadrant.com
  Backpatch: 9.4-, where logical decoding was introduced

* Fix jit compilation bug on wide tables. (Andres Freund, 2018-11-27)

  The function generated to perform JIT compiled tuple deforming failed when HeapTupleHeader's t_hoff was bigger than a signed int8. I'd failed to realize that LLVM's getelementptr would treat an int8 index argument as signed, rather than unsigned. That means that a hoff larger than 127 would result in a negative offset being applied. Fix that by widening the index to 32bit.

  Add a testcase with a wide table. Don't drop it, as it seems useful to verify other tools deal properly with wide tables.

  Thanks to Justin Pryzby for both reporting a bug and then reducing it to a reproducible testcase!

  Reported-By: Justin Pryzby
  Author: Andres Freund
  Discussion: https://postgr.es/m/20181115223959.GB10913@telsasoft.com
  Backpatch: 11, just as jit compilation was

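  As a self-contained illustration of the failure mode (plain C standing in for the LLVM IR semantics): an offset above 127 squeezed into a signed 8-bit index goes negative on typical two's-complement platforms, while widening preserves it.

      #include <stdio.h>

      int
      main(void)
      {
          unsigned char t_hoff = 200;     /* header offset > 127, as on a wide table */

          /* Interpreted as a signed 8-bit index (getelementptr semantics),
           * the offset wraps to a negative value: */
          signed char narrow = (signed char) t_hoff;
          printf("as signed 8-bit index: %d\n", (int) narrow);   /* -56 */

          /* Widened to a 32-bit index before use, the real offset survives: */
          printf("as 32-bit index:       %d\n", (int) t_hoff);   /* 200 */
          return 0;
      }
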
* Fix ac218aa4f6 to work on versions before 9.5. (Andres Freund, 2018-11-26)

  Unfortunately ac218aa4f6 missed the fact that a reference to 'pg_catalog.regnamespace'::regclass wouldn't work before that type is known. Fix that, by replacing the regtype usage with a join to pg_type.

  Reported-By: Tom Lane
  Author: Andres Freund
  Discussion: https://postgr.es/m/8863.1543297423@sss.pgh.pa.us
  Backpatch: 9.5-, like ac218aa4f6

* Update pg_upgrade test for reg* to include regrole and regnamespace. (Andres Freund, 2018-11-26)

  When the regrole (0c90f6769) and regnamespace (cb9fa802b) types were added in 9.5, pg_upgrade's check for reg* types wasn't updated. While regrole currently is safe, regnamespace is not.

  It seems unlikely that anybody uses regnamespace inside catalog tables across a pg_upgrade, but the tests should be correct nevertheless.

  While at it, reorder the types checked in the query to be alphabetical. Otherwise it's annoying to compare existing and tested-for types.

  Author: Andres Freund
  Discussion: https://postgr.es/m/037e152a-cb25-3bcb-4f35-bdc9988f8204@2ndQuadrant.com
  Backpatch: 9.5-, as regrole/regnamespace

* Fix translation of special characters in psql's LaTeX output modes. (Tom Lane, 2018-11-26)

  latex_escaped_print() mistranslated \ and failed to provide any translation for # ^ and ~, all of which would typically lead to LaTeX document syntax errors. In addition it didn't translate < > and |, which would typically render as unexpected characters.

  To some extent this represents shortcomings in ancient versions of LaTeX, which if memory serves had no easy way to render these control characters as ASCII text. But that's been fixed for, um, decades. In any case there is no value in emitting guaranteed-to-fail output for these characters.

  Noted while fooling with test cases added by commit 9a98984f4. Back-patch the code change to all supported versions.

* Update additional float4/8 expected-output files. (Tom Lane, 2018-11-24)

  I forgot that the back branches have more variant files than HEAD :-(. Per buildfarm.

  Discussion: https://postgr.es/m/15519-4fc785b483201ff1@postgresql.org

* Fix float-to-integer coercions to handle edge cases correctly. (Tom Lane, 2018-11-24)

  ftoi4 and its sibling coercion functions did their overflow checks in a way that looked superficially plausible, but actually depended on an assumption that the MIN and MAX comparison constants can be represented exactly in the float4 or float8 domain. That fails in ftoi4, ftoi8, and dtoi8, resulting in a possibility that values near the MAX limit will be wrongly converted (to negative values) when they need to be rejected. Also, because we compared before rounding off the fractional part, the other three functions threw errors for values that really ought to get rounded to the min or max integer value.

  Fix by doing rint() first (requiring an assumption that it handles NaN and Inf correctly; but dtoi8 and ftoi8 were assuming that already), and by comparing to values that should coerce to float exactly, namely INTxx_MIN and -INTxx_MIN. Also remove some random cosmetic discrepancies between these six functions.

  This back-patches commits cbdb8b4c0 and 452b637d4. In the 9.4 branch, also back-patch the portion of 62e2a8dc2 that added PG_INTnn_MIN and related constants to c.h, so that these functions can rely on them.

  Per bug #15519 from Victor Petrovykh. Patch by me; thanks to Andrew Gierth for analysis and discussion.

  Discussion: https://postgr.es/m/15519-4fc785b483201ff1@postgresql.org

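  A hedged, self-contained sketch of the described order of operations for the float4-to-int4 case (an illustration of the technique, not the actual ftoi4 source):

      #include <math.h>
      #include <stdint.h>
      #include <stdio.h>

      /*
       * Sketch: rint() first, then compare against bounds that are exactly
       * representable in float.  INT32_MIN (-2^31) converts to float exactly;
       * INT32_MAX (2^31 - 1) does not, so the upper bound is written as
       * -(float) INT32_MIN, i.e. 2^31.
       */
      static int32_t
      ftoi4_sketch(float num, int *ok)
      {
          num = rintf(num);       /* round off the fraction before range-checking */

          if (isnan(num) ||
              num < (float) INT32_MIN ||
              num >= -((float) INT32_MIN))
          {
              *ok = 0;            /* out of range: caller would raise an error */
              return 0;
          }
          *ok = 1;
          return (int32_t) num;
      }

      int
      main(void)
      {
          int ok;

          printf("%d (ok=%d)\n", ftoi4_sketch(123.6f, &ok), ok);  /* 124, ok=1 */
          printf("%d (ok=%d)\n", ftoi4_sketch(2.2e9f, &ok), ok);  /* rejected, ok=0 */
          return 0;
      }
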
* Don't allow partitioned indexes in pg_global tablespace (Alvaro Herrera, 2018-11-23)

  Missing in dfa608141982.

  Author: David Rowley
  Discussion: https://postgr.es/m/CAKJS1f-M3NMTCpv=vDfkoqHbMPFf=3-Z1ud=+1DHH00tC+zLaQ@mail.gmail.com

* Fix another crash in json{b}_populate_recordset and json{b}_to_recordset. (Tom Lane, 2018-11-22)

  populate_recordset_worker() failed to consider the possibility that the supplied JSON data contains no rows, so that update_cached_tupdesc never got called. This led to a null-pointer dereference since commit 9a5e8ed28; before that it led to a bogus "set-valued function called in context that cannot accept a set" error. Fix by forcing the update to happen.

  Per bug #15514. Back-patch to v11 as 9a5e8ed28 was. (If we were excited about the bogus error, we could perhaps go back further, but it'd take more work to figure out how to fix it in older branches. Given the lack of field complaints about that aspect, I'm not excited.)

  Discussion: https://postgr.es/m/15514-59d5b4c4065b178b@postgresql.org

* Add needed #include. (Tom Lane, 2018-11-19)

  Per POSIX, WIFSIGNALED and related macros are provided by <sys/wait.h>. Apparently on Linux they're also pulled in by some other inclusions, but BSD-ish systems are pickier. Fixes portability issue in ffa4cbd62.

  Per buildfarm.

* Handle EPIPE more sanely when we close a pipe reading from a program. (Tom Lane, 2018-11-19)

  Previously, any program launched by COPY TO/FROM PROGRAM inherited the server's setting of SIGPIPE handling, i.e. SIG_IGN. Hence, if we were doing COPY FROM PROGRAM and closed the pipe early, the child process would see EPIPE on its output file and typically would treat that as a fatal error, in turn causing the COPY to report error. Similarly, one could get a failure report from a query that didn't read all of the output from a contrib/file_fdw foreign table that uses file_fdw's PROGRAM option.

  To fix, ensure that child programs inherit SIG_DFL not SIG_IGN processing of SIGPIPE. This seems like an all-around better situation since if the called program wants some non-default treatment of SIGPIPE, it would expect to have to set that up for itself. Then in COPY, if it's COPY FROM PROGRAM and we stop reading short of detecting EOF, treat a SIGPIPE exit from the called program as a non-error condition. This still allows us to report an error for any case where the called program gets SIGPIPE on some other file descriptor.

  As coded, we won't report a SIGPIPE if we stop reading as a result of seeing an in-band EOF marker (e.g. COPY BINARY EOF marker). It's somewhat debatable whether we should complain if the called program continues to transmit data after an EOF marker. However, it seems like we should avoid throwing error in any questionable cases, especially in a back-patched fix, and anyway it would take additional code to make such an error get reported consistently.

  Back-patch to v10. We could go further back, since COPY FROM PROGRAM has been around awhile, but AFAICS the only way to reach this situation using core or contrib is via file_fdw, which has only supported PROGRAM sources since v10. The COPY statement per se has no feature whereby it'd stop reading without having hit EOF or an error already. Therefore, I don't see any upside to back-patching further that'd outweigh the risk of complaints about behavioral change.

  Per bug #15449 from Eric Cyr. Patch by me, review by Etsuro Fujita and Kyotaro Horiguchi.

  Discussion: https://postgr.es/m/15449-1cf737dd5929450e@postgresql.org

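  A minimal sketch of the child-side step when launching a program over a pipe; the function name and structure are illustrative, not the actual COPY implementation:

      #include <signal.h>
      #include <unistd.h>

      /*
       * Sketch: a server that ignores SIGPIPE (SIG_IGN) must restore the
       * default disposition in the child before exec, so the called program
       * sees conventional SIGPIPE behavior instead of surprising EPIPE errors.
       */
      static void
      run_pipe_program(const char *shell_cmd)
      {
          pid_t pid = fork();

          if (pid == 0)                       /* child */
          {
              signal(SIGPIPE, SIG_DFL);       /* undo the server's SIG_IGN */
              execl("/bin/sh", "sh", "-c", shell_cmd, (char *) NULL);
              _exit(127);                     /* exec failed */
          }
          /* parent: set up pipe ends; if it stops reading early, a child
           * exit caused by SIGPIPE is then treated as a non-error */
      }
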
* Disallow COPY FREEZE on partitioned tables (Alvaro Herrera, 2018-11-19)

  This didn't actually work: COPY would fail to flush the right files, and instead would try to flush a non-existing file, causing the whole transaction to fail. Cope by raising an error as soon as the command is sent instead, to avoid a nasty later surprise.

  Of course, it would be much better to make it work, but we don't have a patch for that yet, and we don't know if we'll want to backpatch one when we do.

  Reported-by: Tomas Vondra
  Author: David Rowley
  Reviewed-by: Amit Langote, Steve Singer, Tomas Vondra

* PANIC on fsync() failure. (Thomas Munro, 2018-11-19)

  On some operating systems, it doesn't make sense to retry fsync(), because dirty data cached by the kernel may have been dropped on write-back failure. In that case the only remaining copy of the data is in the WAL. A subsequent fsync() could appear to succeed, but not have flushed the data. That means that a future checkpoint could apparently complete successfully but have lost data.

  Therefore, violently prevent any future checkpoint attempts by panicking on the first fsync() failure. Note that we already did the same for WAL data; this change extends that behavior to non-temporary data files.

  Provide a GUC data_sync_retry to control this new behavior, for users of operating systems that don't eject dirty data, and possibly forensic/testing uses. If it is set to on and the write-back error was transient, a later checkpoint might genuinely succeed (on a system that does not throw away buffers on failure); if the error is permanent, later checkpoints will continue to fail. The GUC defaults to off, meaning that we panic.

  Back-patch to all supported releases.

  There is still a narrow window for error-loss on some operating systems: if the file is closed and later reopened and a write-back error occurs in the intervening time, but the inode has the bad luck to be evicted due to memory pressure before we reopen, we could miss the error. A later patch will address that with a scheme for keeping files with dirty data open at all times, but we judge that to be too complicated to back-patch.

  Author: Craig Ringer, with some adjustments by Thomas Munro
  Reported-by: Craig Ringer
  Reviewed-by: Robert Haas, Thomas Munro, Andres Freund
  Discussion: https://postgr.es/m/20180427222842.in2e4mibx45zdth5%40alap3.anarazel.de

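  The heart of the change, as a hedged stand-alone sketch (abort() stands in for the server's PANIC, and data_sync_retry for the new GUC):

      #include <errno.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <unistd.h>

      static int data_sync_retry = 0;     /* off by default: first failure panics */

      /*
       * Sketch: never retry a failed fsync() silently.  The kernel may have
       * dropped the dirty pages already, so a later "successful" fsync()
       * would prove nothing about the data reaching disk.
       */
      static void
      checked_fsync(int fd, const char *path)
      {
          if (fsync(fd) != 0)
          {
              fprintf(stderr, "could not fsync file \"%s\": %s\n",
                      path, strerror(errno));
              if (!data_sync_retry)
                  abort();        /* force crash recovery from the WAL */
              /* with data_sync_retry=on, leave the request queued and let a
               * later checkpoint retry */
          }
      }
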
* Don't forget about failed fsync() requests. (Thomas Munro, 2018-11-19)

  If fsync() fails, md.c must keep the request in its bitmap, so that future attempts will try again.

  Back-patch to all supported releases.

  Author: Thomas Munro
  Reviewed-by: Amit Kapila
  Reported-by: Andrew Gierth
  Discussion: https://postgr.es/m/87y3i1ia4w.fsf%40news-spur.riddles.org.uk

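  A self-contained sketch of the bookkeeping idea; the flat bitmap is illustrative only (md.c's real request tracking is more elaborate):

      #include <stdint.h>
      #include <unistd.h>

      /*
       * Sketch: process pending fsync requests, but keep the bit set for any
       * file whose fsync() fails, so the next cycle retries it instead of
       * forgetting about the dirty data.
       */
      static uint32_t
      process_fsync_requests(uint32_t pending, const int fds[32])
      {
          uint32_t remaining = 0;

          for (int i = 0; i < 32; i++)
          {
              if (!(pending & (UINT32_C(1) << i)))
                  continue;
              if (fsync(fds[i]) != 0)
                  remaining |= UINT32_C(1) << i;  /* failed: leave it queued */
          }
          return remaining;
      }
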
* Add valgrind suppressions for wcsrtombs optimizations (Tomas Vondra, 2018-11-18)

  wcsrtombs (called through wchar2char from common functions like lower, upper, etc.) uses various optimizations that may look like access to uninitialized data, triggering valgrind reports. For example AVX2 instructions load data in 256-bit chunks, and gconv does something similar with 32-bit chunks. This is faster than accessing the bytes one by one, and the uninitialized part of the buffer is not actually used. So suppress the bogus reports.

  The exact stack depends on possible optimizations - it might be AVX, SSE (as in the report by Aleksander Alekseev) or something else. Hence the last frame is wildcarded, to deal with this.

  Backpatch all the way back to 9.4.

  Author: Tomas Vondra
  Discussion: https://www.postgresql.org/message-id/flat/90ac0452-e907-e7a4-b3c8-15bd33780e62%402ndquadrant.com
  Discussion: https://www.postgresql.org/message-id/20180220150838.GD18315@e733.localdomain

* Correct code comments for PartitionedRelPruneInfo struct (Peter Eisentraut, 2018-11-15)

  The comments above the PartitionedRelPruneInfo struct incorrectly document how subplan_map and subpart_map are indexed. This seems to have snuck in on 4e232364033.

  Author: David Rowley <david.rowley@2ndquadrant.com>

* Update executor documentation for run-time partition pruning (Peter Eisentraut, 2018-11-15)

  With run-time partition pruning, there is no longer necessarily an executor node for each corresponding plan node.

  Author: David Rowley <david.rowley@2ndquadrant.com>

* Make reformat-dat-files, reformat-dat-files-save VPATH safe. (Andres Freund, 2018-11-15)

  The reformat_dat_file.pl script, added by 372728b0d49552641, supported all the necessary options to make it work in a VPATH build, but the makefile invocations didn't take VPATH into account. Fix that.

  Discussion: https://postgr.es/m/20181115185303.d2z7wonx23mdfvd3@alap3.anarazel.de
  Backpatch: 11-, where 372728b0d49552641 was merged

* Use 64 bit type for BufFileSize(). (Thomas Munro, 2018-11-15)

  BufFileSize() can't use off_t, because it's only 32 bits wide on some systems. BufFile objects can have many 1GB segments so the total size can exceed 2^31. The only known client of the function is parallel CREATE INDEX, which was reported to fail when building large indexes on Windows.

  Though this is technically an ABI break on platforms with a 32 bit off_t and we might normally avoid back-patching it, the function is brand new and thus unlikely to have been discovered by extension authors yet, and it's fairly thoroughly broken on those platforms anyway, so just fix it.

  Defect in 9da0cc35. Bug #15460. Back-patch to 11, where this function landed.

  Author: Thomas Munro
  Reported-by: Paul van der Linden, Pavel Oskin
  Reviewed-by: Peter Geoghegan
  Discussion: https://postgr.es/m/15460-b6db80de822fa0ad%40postgresql.org
  Discussion: https://postgr.es/m/CAHDGBJP_GsESbTt4P3FZA8kMUKuYxjg57XHF7NRBoKnR%3DCAR-g%40mail.gmail.com

* Second try at fixing numeric data passed through an ECPG SQLDA. (Tom Lane, 2018-11-14)

  In commit ecfd55795, I removed sqlda.c's checks for ndigits != 0 on the grounds that we should duplicate the state of the numeric value's digit buffer even when all the digits are zeroes. However, that still isn't quite right, because another possible state of the digit buffer is buf == digits == NULL (this occurs for a NaN). As the code now stands, it'll invoke memcpy with a NULL source address and zero bytecount, which we know a few platforms crash on. Hence, reinstate the no-copy short-circuit, but make it test specifically for buf != NULL rather than some other condition.

  In hindsight, the ndigits test (added by commit f2ae9f9c3) was almost certainly meant to fix the NaN case, not the all-zeroes case as the associated thread alleged.

  As before, back-patch to all supported versions.

  Discussion: https://postgr.es/m/1803D792815FC24D871C00D17AE95905C71161@g01jpexmbkw24

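  A hedged, self-contained sketch of the guard; the struct is a stand-in for ECPG's numeric and the field names are assumptions:

      #include <stddef.h>
      #include <string.h>

      /* Illustrative stand-in for ECPG's numeric digit buffer. */
      typedef struct
      {
          unsigned char *buf;     /* digit buffer; NULL for NaN */
          unsigned char *digits;  /* first digit within buf */
          int            ndigits; /* 0 for an all-zeroes value */
          size_t         buflen;
      } numeric_ish;

      /*
       * Sketch: short-circuit on buf == NULL (the NaN state), not on
       * ndigits == 0 (a legitimate all-zeroes value whose buffer must still
       * be duplicated).  memcpy() from a null pointer is undefined behavior
       * even with a zero byte count, and crashes on some platforms.
       */
      static void
      copy_digit_buffer(unsigned char *dst, const numeric_ish *src)
      {
          if (src->buf == NULL)
              return;             /* NaN: nothing to copy */
          memcpy(dst, src->buf, src->buflen);
      }
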
* Initialize TransactionState and user ID consistently at transaction start (Michael Paquier, 2018-11-14)

  If a failure happens when a transaction is starting between the moment the transaction status is changed from TRANS_DEFAULT to TRANS_START and the moment the current user ID and security context flags are fetched via GetUserIdAndSecContext(), or before initializing its basic fields, then those may get reset to incorrect values when the transaction aborts, leaving the session in an inconsistent state.

  One problem reported is that failing a starting transaction at the first query of a session could cause several kinds of system crashes on the follow-up queries.

  In order to solve that, move the initialization of the transaction state fields and the call of GetUserIdAndSecContext() in charge of fetching the current user ID close to the point where the transaction status is switched to TRANS_START, where there cannot be any error triggered in-between, per an idea of Tom Lane. This properly ensures that the current user ID, the security context flags and the basic fields of TransactionState remain consistent even if the transaction fails while starting.

  Reported-by: Richard Guo
  Diagnosed-by: Richard Guo
  Author: Michael Paquier
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/CAN_9JTxECSb=pEPcb0a8d+6J+bDcOZ4=DgRo_B7Y5gRHJUM=Rw@mail.gmail.com
  Backpatch-through: 9.4

* Fix incorrect results for numeric data passed through an ECPG SQLDA. (Tom Lane, 2018-11-13)

  Numeric values with leading zeroes were incorrectly copied into a SQLDA (SQL Descriptor Area), leading to wrong results in ECPG programs.

  Report and patch by Daisuke Higuchi. Back-patch to all supported versions.

  Discussion: https://postgr.es/m/1803D792815FC24D871C00D17AE95905C71161@g01jpexmbkw24

* pg_dump: Fix dumping of WITH OIDS tables (Peter Eisentraut, 2018-11-13)

  A table with OIDs that was the first in the dump output would not get dumped with OIDs enabled. Fix that.

  The reason was that the currWithOids flag was declared to be bool but actually also takes a -1 value for "don't know yet". But under stdbool.h semantics, that is coerced to true, so the required SET default_with_oids command is not output again. Change the variable type to char to fix that.

  Reported-by: Derek Nelson <derek@pipelinedb.com>

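  A tiny, runnable demonstration of why a bool can't carry the tri-state (true / false / "don't know yet"); the variable names are illustrative:

      #include <stdbool.h>
      #include <stdio.h>

      int
      main(void)
      {
          bool with_oids_bool = -1;   /* intended "don't know yet" sentinel */
          char with_oids_char = -1;   /* the same sentinel, kept in a char */

          /* Under stdbool.h semantics any nonzero assignment becomes true (1),
           * so the sentinel is indistinguishable from "known true": */
          printf("bool sentinel == true? %d\n", with_oids_bool == true);  /* 1 */

          /* A char preserves all three states: */
          printf("char sentinel == 1?    %d\n", with_oids_char == 1);     /* 0 */
          return 0;
      }
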
* Fix const correctness warning. (Thomas Munro, 2018-11-13)

  Per buildfarm.

* Fix the initialization of atomic variables introduced by the group clearing mechanism. (Amit Kapila, 2018-11-13)

  Commits 0e141c0fbb and baaf272ac9 introduced initialization of atomic variables in InitProcess, which means that it's not safe to look at those for backends that aren't currently in use. Fix that by initializing them during postmaster startup.

  Reported-by: Andres Freund
  Author: Amit Kapila
  Backpatch-through: 9.6
  Discussion: https://postgr.es/m/20181027104138.qmbbelopvy7cw2qv@alap3.anarazel.de

* Fix handling of HBA ldapserver with multiple hostnames. (Thomas Munro, 2018-11-13)

  Commit 35c0754f failed to handle space-separated lists of alternative hostnames in ldapserver, when building a URI for ldap_initialize() (OpenLDAP). Such lists need to be expanded to space-separated URIs. Repair.

  Back-patch to 11, to fix bug report #15495.

  Author: Thomas Munro
  Reported-by: Renaud Navarro
  Discussion: https://postgr.es/m/15495-2c39fc196c95cd72%40postgresql.org

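  A hedged, self-contained sketch of the expansion step (the fixed-size buffers, hard-coded scheme, and example hostnames are simplifications for illustration):

      #include <stdio.h>
      #include <string.h>

      /*
       * Sketch: expand a space-separated ldapserver list such as
       * "ldap1.example.net ldap2.example.net" into the space-separated URI
       * list that ldap_initialize() expects.
       */
      static void
      build_ldap_uris(const char *servers, int port, char *out, size_t outlen)
      {
          char  work[256];
          char *save = NULL;

          out[0] = '\0';
          snprintf(work, sizeof(work), "%s", servers);

          for (char *host = strtok_r(work, " ", &save);
               host != NULL;
               host = strtok_r(NULL, " ", &save))
          {
              char uri[300];

              snprintf(uri, sizeof(uri), "%sldap://%s:%d",
                       out[0] ? " " : "", host, port);
              strncat(out, uri, outlen - strlen(out) - 1);
          }
      }

      int
      main(void)
      {
          char uris[512];

          build_ldap_uris("ldap1.example.net ldap2.example.net", 389,
                          uris, sizeof(uris));
          printf("%s\n", uris);  /* ldap://ldap1.example.net:389 ldap://... */
          return 0;
      }
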
* Fix possible buffer overrun in hba.c. (Thomas Munro, 2018-11-13)

  Coverity reports a possible buffer overrun in the code that populates the pg_hba_file_rules view. It may not be a live bug due to restrictions on options that can be used together, but let's increase MAX_HBA_OPTIONS and correct a nearby misleading comment.

  Back-patch to 10 where this code arrived.

  Reported-by: Julian Hsiao
  Discussion: https://postgr.es/m/CADnGQpzbkWdKS2YHNifwAvX5VEsJ5gW49U4o-7UL5pzyTv4vTg%40mail.gmail.com

* Limit the number of index clauses considered in choose_bitmap_and(). (Tom Lane, 2018-11-12)

  classify_index_clause_usage() is O(N^2) in the number of distinct index qual clauses it considers, because of its use of a simple search list to store them. For nearly all queries, that's fine because only a few clauses will be considered. But Alexander Kuzmenkov reported a machine-generated query with 80000 (!) index qual clauses, which caused this code to take forever. Somewhat remarkably, this is the only O(N^2) behavior we now have for such a query, so let's fix it.

  We can get rid of the O(N^2) runtime for cases like this without much damage to the functionality of choose_bitmap_and() by separating out paths with "too many" qual or pred clauses, and deeming them to always be nonredundant with other paths. Then their clauses needn't go into the search list, so it doesn't get too long, but we don't lose the ability to consider bitmap AND plans altogether. I set the threshold for "too many" to be 100 clauses per path, which should be plenty to ensure no change in planning behavior for normal queries.

  There are other things we could do to make this go faster, but it's not clear that it's worth any additional effort. 80000 qual clauses require a whole lot of work in many other places, too.

  The code's been like this for a long time, so back-patch to all supported branches. The troublesome query only works back to 9.5 (in 9.4 it fails with stack overflow in the parser); so I'm not sure that fixing this in 9.4 has any real-world benefit, but perhaps it does.

  Discussion: https://postgr.es/m/90c5bdfa-d633-dabe-9889-3cf3e1acd443@postgrespro.ru

* Fix error-cleanup mistakes in exec_stmt_call(). (Tom Lane, 2018-11-09)

  Commit 15c729347 was a couple bricks shy of a load: we need to ensure that expr->plan gets reset to NULL on any error exit, if it's not supposed to be saved. Also ensure that the stmt->target calculation gets redone if needed.

  The easy way to exhibit a problem is to set up code that violates the writable-argument restriction and then execute it twice. But error exits out of, e.g., setup_param_list() could also break it. Make the existing PG_TRY block cover all of that code to be sure.

  Per report from Pavel Stehule.

  Discussion: https://postgr.es/m/CAFj8pRAeXNTO43W2Y0Cn0YOVFPv1WpYyOqQrrzUiN6s=dn7gCg@mail.gmail.com

* Fix missing role dependencies for some schema and type ACLs. (Tom Lane, 2018-11-09)

  This patch fixes several related cases in which pg_shdepend entries were never made, or were lost, for references to roles appearing in the ACLs of schemas and/or types. While that did no immediate harm, if a referenced role were later dropped, the drop would be allowed and would leave a dangling reference in the object's ACL. That still wasn't a big problem for normal database usage, but it would cause obscure failures in subsequent dump/reload or pg_upgrade attempts, taking the form of attempts to grant privileges to all-numeric role names. (I think I've seen field reports matching that symptom, but can't find any right now.)

  Several cases are fixed here:

  1. ALTER DOMAIN SET/DROP DEFAULT would lose the dependencies for any existing ACL entries for the domain. This case is ancient, dating back as far as we've had pg_shdepend tracking at all.

  2. If a default type privilege applies, CREATE TYPE recorded the ACL properly but forgot to install dependency entries for it. This dates to the addition of default privileges for types in 9.2.

  3. If a default schema privilege applies, CREATE SCHEMA recorded the ACL properly but forgot to install dependency entries for it. This dates to the addition of default privileges for schemas in v10 (commit ab89e465c).

  Another somewhat-related problem is that when creating a relation rowtype or implicit array type, TypeCreate would apply any available default type privileges to that type, which we don't really want since such an object isn't supposed to have privileges of its own. (You can't, for example, drop such privileges once they've been added to an array type.)

  ab89e465c is also to blame for a race condition in the regression tests: privileges.sql transiently installed globally-applicable default privileges on schemas, which sometimes got absorbed into the ACLs of schemas created by concurrent test scripts. This should have resulted in failures when privileges.sql tried to drop the role holding such privileges; but thanks to the bug fixed here, it instead led to dangling ACLs in the final state of the regression database. We'd managed not to notice that, but it became obvious in the wake of commit da906766c, which allowed the race condition to occur in pg_upgrade tests.

  To fix, add a function recordDependencyOnNewAcl to encapsulate what callers of get_user_default_acl need to do; while the original call sites got that right via ad-hoc code, none of the later-added ones have. Also change GenerateTypeDependencies to generate these dependencies, which requires adding the typacl to its parameter list. (That might be annoying if there are any extensions calling that function directly; but if there are, they're most likely buggy in the same way as the core callers were, so they need work anyway.) While I was at it, I changed GenerateTypeDependencies to accept most of its parameters in the form of a Form_pg_type pointer, making its parameter list a bit less unwieldy and mistake-prone.

  The test race condition is fixed just by wrapping the addition and removal of default privileges into a single transaction, so that that state is never visible externally. We might eventually prefer to separate out tests of default privileges into a script that runs by itself, but that would be a bigger change and would make the tests run slower overall.

  Back-patch relevant parts to all supported branches.

  Discussion: https://postgr.es/m/15719.1541725287@sss.pgh.pa.us

* Fix dependency handling of partitions and inheritance for ON COMMIT (Michael Paquier, 2018-11-09)

  This commit fixes a set of issues with ON COMMIT actions when used on partitioned tables and tables with inheritance children:

  - Applying ON COMMIT DROP on a partitioned table with partitions or on a table with inheritance children caused a failure at commit time, with complaints about the children being already dropped, as all relations are dropped at the same time.

  - Applying ON COMMIT DELETE on a partition relying on a partitioned table which uses ON COMMIT DROP would cause the partition truncation to fail as the parent is removed first.

  The solution to the first problem is to handle the removal of all the dependencies in one go instead of dropping relations one-by-one, based on a suggestion from Álvaro Herrera. So instead all the relation OIDs to remove are gathered and then processed in one round of multiple deletions.

  The solution to the second problem is to reorder the actions, with truncation happening first and relation drop done after. Even if it means that a partition could be first truncated, then immediately dropped if its partitioned table is dropped, this has the merit of keeping the code simple as there is no need to do existence checks on the relations to drop.

  Contrary to a manual TRUNCATE on a partitioned table, ON COMMIT DELETE does not cascade to its partitions. The ON COMMIT action defined on each partition gets the priority.

  Author: Michael Paquier
  Reviewed-by: Amit Langote, Álvaro Herrera, Robert Haas
  Discussion: https://postgr.es/m/68f17907-ec98-1192-f99f-8011400517f5@lab.ntt.co.jp
  Backpatch-through: 10

* Disallow setting client_min_messages higher than ERROR. (Tom Lane, 2018-11-08)

  Previously it was possible to set client_min_messages to FATAL or PANIC, which had the effect of suppressing transmission of regular ERROR messages to the client. Perhaps that seemed like a useful option in the past, but the trouble with it is that it breaks guarantees that are explicitly made in our FE/BE protocol spec about how a query cycle can end. While libpq and psql manage to cope with the omission, that's mostly because they are not very bright; client libraries that have more semantic knowledge are likely to get confused. Notably, pgODBC doesn't behave very sanely. Let's fix this by getting rid of the ability to set client_min_messages above ERROR.

  In HEAD, just remove the FATAL and PANIC options from the set of allowed enum values for client_min_messages. (This change also affects trace_recovery_messages, but that's OK since these aren't useful values for that variable either.)

  In the back branches, there was concern that rejecting these values might break applications that are explicitly setting things that way. I'm pretty skeptical of that argument, but accommodate it by accepting these values and then internally setting the variable to ERROR anyway.

  In all branches, this allows a couple of tiny simplifications in the logic in elog.c, so do that.

  Also respond to the point that was made that client_min_messages has exactly nothing to do with the server's logging behavior, and therefore does not belong in the "When To Log" subsection of the documentation. The "Statement Behavior" subsection is a better match, so move it there.

  Jonah Harris and Tom Lane

  Discussion: https://postgr.es/m/7809.1541521180@sss.pgh.pa.us
  Discussion: https://postgr.es/m/15479-ef0f4cc2fd995ca2@postgresql.org

* Revise attribute handling code on partition creation (Alvaro Herrera, 2018-11-08)

  The original code to propagate NOT NULL and default expressions specified when creating a partition was mostly copy-pasted from typed-tables creation, but not being a great match it contained some duplication, inefficiency and bugs.

  This commit fixes the bug that NOT NULL constraints declared in the parent table would not be honored in the partition. One reported issue that is not fixed is that a DEFAULT declared in the child is not used when inserting through the parent. That would amount to a behavioral change that's better not back-patched.

  This rewrite makes the code simpler:

  1. instead of checking for duplicate column names in its own block, reuse the original one that already did that;

  2. instead of concatenating the list of columns from parent and the one declared in the partition and scanning the result to (incorrectly) propagate defaults and not-null constraints, just scan the latter searching the former for a match, and merging sensibly. This works because we know the list in the parent is already correct and there can only be one parent.

  This rewrite makes ColumnDef->is_from_parent unused, so it's removed on branch master; on released branches, it's kept as an unused field in order not to cause ABI incompatibilities.

  This commit also adds a test case for creating partitions with collations mismatching that on the parent table, something that is closely related to the code being patched. No code change is introduced though, since that'd be a behavior change that could break some (broken) working applications.

  Amit Langote wrote a less invasive fix for the original NOT NULL/defaults bug, but while I kept the tests he added, I ended up not using his original code. Ashutosh Bapat reviewed Amit's fix. Amit reviewed mine.

  Author: Álvaro Herrera, Amit Langote
  Reviewed-by: Ashutosh Bapat, Amit Langote
  Reported-by: Jürgen Strobel (bug #15212)
  Discussion: https://postgr.es/m/152746742177.1291.9847032632907407358@wrigleys.postgresql.org

* Disable recheck_on_update optimization to avoid crashes. (Tom Lane, 2018-11-06)

  The code added by commit c203d6cf8 causes a crash in at least one case, where a potentially-optimizable expression index has a storage type different from the input data type. A cursory code review turned up numerous other problems that seem impractical to fix on short notice.

  Andres argued for revert of that patch some time ago, and if additional senior committers had been paying attention, that's likely what would have happened, but we were not :-(

  At this point we can't just revert, at least not in v11, because that would mean an ABI break for code touching relcache entries. And we should not remove the (also buggy) support for the recheck_on_update index reloption, since it might already be used in some databases in the field. So this patch just does the as-little-invasive-as-possible measure of disabling the feature as though recheck_on_update were forced off for all indexes. I also removed the related regression tests (which would otherwise fail) and the user-facing documentation of the reloption.

  We should undertake a more thorough code cleanup if the patch can't be fixed, but not under the extreme time pressure of being already overdue for 11.1 release.

  Per report from Ondřej Bouda and subsequent private discussion among pgsql-release.

  Discussion: https://postgr.es/m/20181106185255.776mstcyehnc63ty@alvherre.pgsql

* GUC: adjust effective_cache_size SQL descriptions (Bruce Momjian, 2018-11-06)

  Follow-on patch for commit 3e0f1a4741f564c1a2fa6e944729d6967355d8c7.

  Reported-by: Peter Eisentraut
  Discussion: https://postgr.es/m/369ec766-b947-51bd-4dad-6fb9e026439f@2ndquadrant.com
  Backpatch-through: 9.4

* Rename rbtree.c functions to use "rbt" prefix not "rb" prefix. (Tom Lane, 2018-11-06)

  The "rb" prefix is used by Ruby, so that our existing code results in name collisions that break plruby. We discussed ways to prevent that by adjusting dynamic linker options, but it seems that at best we'd move the pain to other cases. Renaming to avoid the collision is the only portable fix anyway. Fortunately, our rbtree code is not (yet?) widely used --- in core, there's only a single usage in GIN --- so it seems likely that we can get away with a rename.

  I chose to do this basically as s/rb/rbt/g, except for places where there already was a "t" after "rb". The patch could have been made smaller by only touching linker-visible symbols, but it would have resulted in oddly inconsistent-looking code. Better to make it look like "rbt" was the plan all along.

  Back-patch to v10. The rbtree.c code exists back to 9.5, but rb_iterate() which is the actual immediate source of pain was added in v10, so it seems like changing the names before that would have more risk than benefit.

  Per report from Pavel Raiskup.

  Discussion: https://postgr.es/m/4738198.8KVIIDhgEB@nb.usersys.redhat.com

* Stamp 11.1. (Tom Lane, 2018-11-05)

* Fix copy-paste error in errhint() introduced in 691d79a07933. (Andres Freund, 2018-11-05)

  Reported-by: Petr Jelinek
  Discussion: https://postgr.es/m/c95a620b-34f0-7930-aeb5-f7ab804f26cb@2ndquadrant.com
  Backpatch: 9.4-, like the previous commit

* Translation updates (Peter Eisentraut, 2018-11-05)

  Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: 707f81a8bc147ef576cbddd13069c7ae97c76307

* Block creation of partitions with open references to its parent (Michael Paquier, 2018-11-05)

  When a partition is created as part of trigger processing, the partition that just got created can change the properties of the table that the executor of the ongoing command relies on, causing a subsequent crash. This has been found possible when, for example, a BEFORE INSERT trigger creates a new partition for the partitioned table being inserted into.

  Any attempt to do so is now blocked when working on a partition, with regression tests added for both CREATE TABLE PARTITION OF and ALTER TABLE ATTACH PARTITION.

  Reported-by: Dmitry Shalashov
  Author: Amit Langote
  Reviewed-by: Michael Paquier, Tom Lane
  Discussion: https://postgr.es/m/15437-3fe01ee66bd1bae1@postgresql.org
  Backpatch-through: 10

* Ignore partitioned tables when processing ON COMMIT DELETE ROWS (Michael Paquier, 2018-11-05)

  Those tables have no physical storage, making this option unusable with partition trees, as at commit time an actual truncation was attempted. There are still issues with the way ON COMMIT actions are done when mixing several action types; however, this impacts inheritance trees as well, so that issue will be dealt with later.

  Reported-by: Rajkumar Raghuwanshi
  Author: Amit Langote
  Reviewed-by: Michael Paquier, Tom Lane
  Discussion: https://postgr.es/m/CAKcux6mhgcjSiB_egqEAEFgX462QZtncU8QCAJ2HZwM-wWGVew@mail.gmail.com

* Fix ExecuteCallStmt to not scribble on the passed-in parse tree. (Tom Lane, 2018-11-04)

  Modifying the parse tree at execution time is, or at least ought to be, verboten. It seems quite difficult to actually cause a crash this way in v11 (although you can exhibit it pretty easily in HEAD by messing with plan_cache_mode). Nonetheless, it's risky, so fix and back-patch.

  Discussion: https://postgr.es/m/13789.1541359611@sss.pgh.pa.us

* Fix bugs in plpgsql's handling of CALL argument lists. (Tom Lane, 2018-11-04)

  exec_stmt_call() tried to extract information out of a CALL statement's argument list without using expand_function_arguments(), apparently in the hope of saving a few nanoseconds by not processing defaulted arguments. It got that quite wrong though, leading to crashes with named arguments, as well as failure to enforce writability of the argument for a defaulted INOUT parameter. Fix and simplify the logic by using expand_function_arguments() before examining the list.

  Also, move the argument-examination to just after producing the CALL command's plan, before invoking the called procedure. This ensures that we'll track possible changes in the procedure's argument list correctly, and avoids a hazard of the plan cache being flushed while the procedure executes.

  Also fix assorted falsehoods and omissions in associated documentation.

  Per bug #15477 from Alexey Stepanov. Patch by me, with some help from Pavel Stehule. Back-patch to v11.

  Discussion: https://postgr.es/m/15477-86075b1d1d319e0a@postgresql.org
  Discussion: https://postgr.es/m/CAFj8pRA6UsujpTs9Sdwmk-R6yQykPx46wgjj+YZ7zxm4onrDyw@mail.gmail.com

* Fix unused-variable warning. (Tom Lane, 2018-11-04)

  Discussion: https://postgr.es/m/CAMkU=1xTHkS6d0iptCWykHc1Xrh3LBic_gZDo3JzDYru815fLQ@mail.gmail.com

* Prevent generating EEOP_AGG_STRICT_INPUT_CHECK operations when nargs == 0. (Andres Freund, 2018-11-03)

  This only became a problem with 4c640f4f38, which didn't synchronize the value agg_strict_input_check.nargs is set to with the guard condition for emitting the operation.

  Besides such instructions being unnecessary overhead, the LLVM JIT provider currently doesn't support them. It seems more sensible to avoid generating such instructions than to support them.

  Add assertions to make it easier to debug a potential further occurrence.

  Discussion: https://postgr.es/m/2a505161-2727-2473-7c46-591ed108ac52@email.cz
  Backpatch: 11-, like 4c640f4f38.