path: root/src
...
* Avoid need for valgrind suppressions for pg_atomic_init_u64 on some platforms.  (Andres Freund, 2020-06-08)
  Previously we used pg_atomic_write_64_impl inside pg_atomic_init_u64. That works correctly, but on platforms without 64bit single copy atomicity it could trigger spurious valgrind errors about uninitialized memory, because we use compare_and_swap for atomic writes on such platforms.
  I previously suppressed one instance of this problem (6c878edc1df), but as Tom reports that wasn't enough. As the atomic variable cannot yet be concurrently accessible during initialization, it seems better to have pg_atomic_init_64_impl set the value directly.
  Change pg_atomic_init_u32_impl for symmetry.
  Reported-By: Tom Lane
  Author: Andres Freund
  Discussion: https://postgr.es/m/1714601.1591503815@sss.pgh.pa.us
  Backpatch: 9.5-
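  A minimal sketch of the idea (simplified names, not the actual pg_atomic_* implementation): the generic write path on platforms without 64-bit single-copy atomicity is emulated with a compare-and-swap loop, whose initial read of the still-uninitialized value is what Valgrind complains about; initialization can instead use a plain store because nothing else can see the variable yet.

      /* Hypothetical, simplified illustration -- not the real pg_atomic_* code. */
      #include <stdint.h>
      #include <stdbool.h>

      typedef struct
      {
          volatile uint64_t value;
      } my_atomic_u64;

      /* Emulated atomic write: the read of ptr->value is the source of the
       * spurious "uninitialized memory" report when used for initialization. */
      static void
      my_atomic_write_u64(my_atomic_u64 *ptr, uint64_t val)
      {
          uint64_t old = ptr->value;

          while (!__atomic_compare_exchange_n(&ptr->value, &old, val, false,
                                              __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST))
              ;                   /* retry until the swap succeeds */
      }

      /* Initialization: not yet visible to anyone else, so a plain store suffices. */
      static void
      my_atomic_init_u64(my_atomic_u64 *ptr, uint64_t val)
      {
          ptr->value = val;
      }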
* Fix locking bugs that could corrupt pg_control.  (Thomas Munro, 2020-06-08)
  The redo routines for XLOG_CHECKPOINT_{ONLINE,SHUTDOWN} must acquire ControlFileLock before modifying ControlFile->checkPointCopy, or the checkpointer could write out a control file with a bad checksum.
  Likewise, XLogReportParameters() must acquire ControlFileLock before modifying ControlFile and calling UpdateControlFile().
  Back-patch to all supported releases.
  Author: Nathan Bossart <bossartn@amazon.com>
  Author: Fujii Masao <masao.fujii@oss.nttdata.com>
  Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
  Reviewed-by: Michael Paquier <michael@paquier.xyz>
  Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
  Discussion: https://postgr.es/m/70BF24D6-DC51-443F-B55A-95735803842A%40amazon.com
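  Roughly the shape of the corrected pattern, as a hedged backend-code fragment (simplified, not the committed hunk); XLogReportParameters() similarly wraps its ControlFile changes and its UpdateControlFile() call in the same lock:

      /*
       * Any update of the shared ControlFile data must happen under
       * ControlFileLock, so the checkpointer cannot write out a
       * half-updated copy with a stale checksum.
       */
      LWLockAcquire(ControlFileLock, LW_EXCLUSIVE);
      ControlFile->checkPointCopy = checkPoint;   /* shared state the checkpointer also writes */
      LWLockRelease(ControlFileLock);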
* MSVC: Avoid warning when testing a TAP suite without PROVE_FLAGS.  (Noah Misch, 2020-06-07)
  Commit 7be5d8df1f74b78620167d3abf32ee607e728919 surfaced the logic error, which had no functional implications, by adding "use warnings". The buildfarm always customizes PROVE_FLAGS, so the warning did not appear there.
  Back-patch to 9.5 (all supported versions).
* Try to read data from the socket in pqSendSome's write_failed paths.  (Tom Lane, 2020-06-07)
  Even when we've concluded that we have a hard write failure on the socket, we should continue to try to read data. This gives us an opportunity to collect any final error message that the backend might have sent before closing the connection; moreover it is the job of pqReadData not pqSendSome to close the socket once EOF is detected.
  Due to an oversight in 1f39a1c06, pqSendSome failed to try to collect data in the case where we'd already set write_failed. The problem was masked for ordinary query operations (which really only make one write attempt anyway), but COPY to the server would continue to send data indefinitely after a mid-COPY connection loss.
  Hence, add pqReadData calls into the paths where pqSendSome drops data because of write_failed. If we've lost the connection, this will eventually result in closing the socket and setting CONNECTION_BAD, which will cause PQputline and siblings to report failure, allowing the application to terminate the COPY sooner. (Basically this restores what happened before 1f39a1c06.)
  There are related issues that this does not solve; for example, if the backend sends an error but doesn't drop the connection, we did and still will keep pumping COPY data as long as the application sends it. Fixing that will require application-visible behavior changes though, and anyway it's an ancient behavior that we've had few complaints about. For now I'm just trying to fix the regression from 1f39a1c06.
  Per a complaint from Andres Freund. Back-patch into v12 where 1f39a1c06 came in.
  Discussion: https://postgr.es/m/20200603201242.ofvm4jztpqytwfye@alap3.anarazel.de
* Refresh function name in CRC-associated Valgrind suppressions.  (Noah Misch, 2020-06-05)
  Back-patch to 9.5, where commit 4f700bcd20c087f60346cb8aefd0e269be8e2157 first appeared.
  Reviewed by Tom Lane. Reported by Andrew Dunstan.
  Discussion: https://postgr.es/m/4dfabec2-a3ad-0546-2d62-f816c97edd0c@2ndQuadrant.com
* Add unlikely() to CHECK_FOR_INTERRUPTS()  (Joe Conway, 2020-06-05)
  Add the unlikely() branch hint macro to CHECK_FOR_INTERRUPTS(). Backpatch to REL_10_STABLE where we first started using unlikely().
  Discussion: https://www.postgresql.org/message-id/flat/8692553c-7fe8-17d9-cbc1-7cddb758f4c6%40joeconway.com
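  For context, a simplified sketch of the branch-hint pattern (not the exact macros from the tree): unlikely() wraps the compiler's __builtin_expect so the hot path is laid out assuming the interrupt flag is normally clear.

      /* Simplified sketch; the real definitions live in c.h and miscadmin.h. */
      #if defined(__GNUC__) || defined(__clang__)
      #define unlikely(x) __builtin_expect((x) != 0, 0)
      #else
      #define unlikely(x) ((x) != 0)
      #endif

      extern volatile int InterruptPending;
      extern void ProcessInterrupts(void);

      #define CHECK_FOR_INTERRUPTS() \
          do { \
              if (unlikely(InterruptPending)) \
                  ProcessInterrupts(); \
          } while (0)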
* Use query collation, not column's collation, while examining statistics.  (Tom Lane, 2020-06-05)
  Commit 5e0928005 changed the planner so that, instead of blindly using DEFAULT_COLLATION_OID when invoking operators for selectivity estimation, it would use the collation of the column whose statistics we're considering. This was recognized as still being not quite the right thing, but it seemed like a good incremental improvement. However, shortly thereafter we introduced nondeterministic collations, and that creates cases where operators can fail if they're passed the wrong collation. We don't want planning to fail in cases where the query itself would work, so this means that we *must* use the query's collation when invoking operators for estimation purposes.
  The only real problem this creates is in ineq_histogram_selectivity, where the binary search might produce a garbage answer if we perform comparisons using a different collation than the column's histogram is ordered with. However, when the query's collation is significantly different from the column's default collation, the estimate we previously generated would be pretty irrelevant anyway; so it's not clear that this will result in noticeably worse estimates in practice. (A follow-on patch will improve this situation in HEAD, but it seems too invasive for back-patch.)
  The patch requires changing the signatures of mcv_selectivity and allied functions, which are exported and very possibly are used by extensions. In HEAD, I just did that, but an API/ABI break of this sort isn't acceptable in stable branches. Therefore, in v12 the patch introduces "mcv_selectivity_ext" and so on, with signatures matching HEAD, and makes the old functions into wrappers that assume DEFAULT_COLLATION_OID should be used. That does not match the prior behavior, but it should avoid risk of failure in most cases. (In practice, I think most extension datatypes aren't collation-aware, so the change probably doesn't matter to them.)
  Per report from James Lucas. Back-patch to v12 where the problem was introduced.
  Discussion: https://postgr.es/m/CAAFmbbOvfi=wMM=3qRsPunBSLb8BFREno2oOzSBS=mzfLPKABw@mail.gmail.com
* Preserve pg_index.indisreplident across REINDEX CONCURRENTLY  (Michael Paquier, 2020-06-05)
  If the flag value is lost, logical decoding would work the same way as REPLICA IDENTITY NOTHING, meaning that no old tuple values would be included anymore in the changes produced by logical decoding.
  Author: Michael Paquier
  Reviewed-by: Euler Taveira
  Discussion: https://postgr.es/m/20200603065340.GK89559@paquier.xyz
  Backpatch-through: 12
* Reject "23:59:60.nnn" in datetime input.  (Tom Lane, 2020-06-04)
  It's intentional that we don't allow values greater than 24 hours, while we do allow "24:00:00" as well as "23:59:60" as inputs. However, the range check was miscoded in such a way that it would accept "23:59:60.nnn" with a nonzero fraction. For time or timetz, the stored result would then be greater than "24:00:00" which would fail dump/reload, not to mention possibly confusing other operations.
  Fix by explicitly calculating the result and making sure it does not exceed 24 hours. (This calculation is redundant with what will happen later in tm2time or tm2timetz. Maybe someday somebody will find that annoying enough to justify refactoring to avoid the duplication; but that seems too invasive for a back-patched bug fix, and the cost is probably unmeasurable anyway.)
  Note that this change also rejects such input as the time portion of a timestamp(tz) value.
  Back-patch to v10. The bug is far older, but to change this pre-v10 we'd need to ensure that the logic behaves sanely with float timestamps, which is possibly nontrivial due to roundoff considerations. Doesn't really seem worth troubling with.
  Per report from Christoph Berg.
  Discussion: https://postgr.es/m/20200520125807.GB296739@msg.df7cb.de
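  A hedged sketch of the kind of check the fix describes (generic names, not the actual datetime.c variables): compute the full value, fractional seconds included, and reject anything strictly past 24:00:00, so "23:59:60" still passes but "23:59:60.5" does not.

      #include <stdbool.h>
      #include <stdint.h>

      #define USECS_PER_SEC   INT64_C(1000000)
      #define SECS_PER_DAY    86400

      static bool
      time_within_24h(int hour, int min, int sec, int64_t fsec_usec)
      {
          int64_t total = ((int64_t) hour * 3600 + min * 60 + sec) * USECS_PER_SEC
                          + fsec_usec;

          /* "24:00:00" itself is allowed; "23:59:60.5" works out larger and is rejected */
          return total <= (int64_t) SECS_PER_DAY * USECS_PER_SEC;
      }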
* Fix instance of elog() called while holding a spinlock  (Michael Paquier, 2020-06-04)
  This broke the project rule to not call any complex code while a spinlock is held. Issue introduced by b89e151.
  Discussion: https://postgr.es/m/20200602.161518.1399689010416646074.horikyota.ntt@gmail.com
  Backpatch-through: 9.5
* Don't call palloc() while holding a spinlock, either.  (Tom Lane, 2020-06-03)
  Fix some more violations of the "only straight-line code inside a spinlock" rule. These are hazardous not only because they risk holding the lock for an excessively long time, but because it's possible for palloc to throw elog(ERROR), leaving a stuck spinlock behind.
  copy_replication_slot() had two separate places that did pallocs while holding a spinlock. We can make the code simpler and safer by copying the whole ReplicationSlot struct into a local variable while holding the spinlock, and then referencing that copy. (While that's arguably more cycles than we really need to spend holding the lock, the struct isn't all that big, and this way seems far more maintainable than copying fields piecemeal. Anyway this is surely much cheaper than a palloc.) That bug goes back to v12.
  InvalidateObsoleteReplicationSlots() not only did a palloc while holding a spinlock, but for extra sloppiness then leaked the memory --- probably for the lifetime of the checkpointer process, though I didn't try to verify that. Fortunately that silliness is new in HEAD.
  pg_get_replication_slots() had a cosmetic violation of the rule, in that it only assumed it's safe to call namecpy() while holding a spinlock. Still, that's a hazard waiting to bite somebody, and there were some other cosmetic coding-rule violations in the same function, so clean it up.
  I back-patched this as far as v10; the code exists before that but it looks different, and this didn't seem important enough to adapt the patch further back.
  Discussion: https://postgr.es/m/20200602.161518.1399689010416646074.horikyota.ntt@gmail.com
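  A hedged illustration of the "copy the struct, then work on the copy" approach described above (variable names are illustrative, not the committed code):

      ReplicationSlot slot_contents;

      /* Only straight-line code while the spinlock is held. */
      SpinLockAcquire(&slot->mutex);
      slot_contents = *slot;          /* plain struct copy */
      SpinLockRelease(&slot->mutex);

      /* Now it is safe to do things that can allocate or elog(ERROR). */
      char *slot_name = pstrdup(NameStr(slot_contents.data.name));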
* Fix use-after-release mistake in currtid() and currtid2() for views  (Michael Paquier, 2020-06-01)
  This issue has been present since the introduction of this code as of a3519a2 from 2002, and has been found by buildfarm member prion that uses RELCACHE_FORCE_RELEASE via the tests introduced recently in e786be5.
  Discussion: https://postgr.es/m/20200601022055.GB4121@paquier.xyz
  Backpatch-through: 9.5
* Fix crashes with currtid() and currtid2()  (Michael Paquier, 2020-06-01)
  A relation that has no storage initializes rd_tableam to NULL, which caused those two functions to crash because of a pointer dereference. Note that in 11 and older versions, this has always failed with a confusing error "could not open file".
  These two functions are used by the Postgres ODBC driver, which requires them only when connecting to a backend strictly older than 8.1. When connected to 8.2 or a newer version, the driver uses a RETURNING clause instead, whose support was added in 8.2, so it should be possible to just remove both functions in the future. This is left as an issue to address later.
  While on it, add more regression tests for those functions as we never really had coverage for them, and for aggregates of TIDs.
  Reported-by: Jaime Casanova, via sqlsmith
  Author: Michael Paquier
  Reviewed-by: Álvaro Herrera
  Discussion: https://postgr.es/m/CAJGNTeO93u-5APMga6WH41eTZ3Uee9f3s8dCpA-GSSqNs1b=Ug@mail.gmail.com
  Backpatch-through: 12
* Make install-tests target work with vpath builds  (Andrew Dunstan, 2020-05-31)
  Also add a top-level install-tests target.
  Backpatch to all live branches.
  Craig Ringer, tweaked by me.
* llvmjit: Fix building against LLVM 11 by removing unnecessary include.  (Andres Freund, 2020-05-28)
  LLVM has removed this header, in the branch that will become llvm 11. But as it turns out we didn't actually need it, so just remove it.
  Author: Jesse Zhang <sbjesse@gmail.com>
  Discussion: https://postgr.es/m/CAGf+fX7bvtP0YXMu7pOsu_NwhxW6dArTkxb=jt7M2-UJkyJ_3g@mail.gmail.com
  Backpatch: 11, where JIT support using llvm was introduced.
* Add CHECK_FOR_INTERRUPTS() to the repeat() function  (Joe Conway, 2020-05-28)
  The repeat() function loops for potentially a long time without ever checking for interrupts. This prevents, for example, a query cancel from interrupting until the work is all done. Fix by inserting a CHECK_FOR_INTERRUPTS() into the loop.
  Backpatch to all supported versions.
  Discussion: https://www.postgresql.org/message-id/flat/8692553c-7fe8-17d9-cbc1-7cddb758f4c6%40joeconway.com
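  A simplified illustration of the pattern, not the actual text/repeat() source: any loop whose iteration count is driven by user input should check for pending interrupts on each pass.

      for (int i = 0; i < count; i++)
      {
          CHECK_FOR_INTERRUPTS();     /* lets a query cancel or shutdown request
                                       * interrupt the work between iterations */
          memcpy(destptr, sourcestr, slen);
          destptr += slen;
      }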
* Add missing error code to "cannot attach index ..." error.  (Heikki Linnakangas, 2020-05-28)
  ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE was used in an ereport with the same message but different errdetail a few lines earlier, so use that here as well.
  Backpatch-through: 11
* Fix typo in test comment.  (Heikki Linnakangas, 2020-05-28)
  The same comment was copied to a few different places, with the same typo. Backpatch down to v11, where this typo was introduced.
* Add lcov exclusion markers to jsonpath scanner  (Peter Eisentraut, 2020-05-26)
  This was done for all scanners in 421167362242ce1fb46d6d720798787e7cd65aad but not added to the new one.
* gss: add missing references to hostgssenc and hostnogssenc  (Bruce Momjian, 2020-05-25)
  These were missed when hostgssenc and hostnogssenc were added to pg_hba.conf in PG 12; this updates the docs and pg_hba.conf.sample.
  Reported-by: Arthur Nascimento
  Bug: 16380
  Discussion: https://postgr.es/m/20200421182736.GG19613@momjian.us
  Backpatch-through: 12
* Fix two typos in a comment  (Alvaro Herrera, 2020-05-22)
  They were introduced in 898e5e3290a7; backpatch to 12.
* Fix MSVC installations with multiple "configure" files detected  (Michael Paquier, 2020-05-21)
  When installing binaries and libraries using the MSVC installation routines, the operation gets done after moving to the root folder, whose location is detected by checking if "configure" exists two times in a row. So, calling the installation script from src/tools/msvc/ with an extra "configure" file four levels up the root path of the code tree causes the execution to go further up, leading to a failure in finding the builds.
  This commit fixes the issue by moving to the root folder of the code tree only once, when necessary.
  Author: Arnold Müller
  Reviewed-by: Daniel Gustafsson
  Discussion: https://postgr.es/m/16343-f638f67e7e52b86c@postgresql.org
  Backpatch-through: 9.5
* Fix comment in slot.c.  (Amit Kapila, 2020-05-18)
  Reported-by: Sawada Masahiko
  Author: Sawada Masahiko
  Reviewed-by: Amit Kapila
  Backpatch-through: 9.5
  Discussion: https://postgr.es/m/CA+fd4k4Ws7M7YQ8PqSym5WB1y75dZeBTd1sZJUQdfe0KJQ-iSA@mail.gmail.com
* Fix assertion with relation using REPLICA IDENTITY FULL in subscriber  (Michael Paquier, 2020-05-16)
  In a logical replication subscriber, a table using REPLICA IDENTITY FULL which has a primary key would try to use the primary key's index to scan for a tuple, but an assertion treated as correct only the case of an index associated with REPLICA IDENTITY USING INDEX. This commit corrects the assertion so that the use of a primary key index is also accepted as a valid case.
  Reported-by: Dilip Kumar
  Analyzed-by: Dilip Kumar
  Author: Euler Taveira
  Reviewed-by: Michael Paquier, Masahiko Sawada
  Discussion: https://postgr.es/m/CAFiTN-u64S5bUiPL1q5kwpHNd0hRnf1OE-bzxNiOs5zo84i51w@mail.gmail.com
  Backpatch-through: 10
* Fix bogus initialization of replication origin shared memory state.  (Tom Lane, 2020-05-15)
  The previous coding zeroed out offsetof(ReplicationStateCtl, states) more bytes than it was entitled to, as a consequence of starting the zeroing from the wrong pointer (or, if you prefer, using the wrong calculation of how much to zero).
  It's unsurprising that this has not caused any reported problems, since it can be expected that the newly-allocated block is at the end of what we've used in shared memory, and we always make the shmem block substantially bigger than minimally necessary. Nonetheless, this is wrong and it could bite us someday; plus it's a dangerous model for somebody to copy.
  This dates back to the introduction of this code (commit 5aa235042), so back-patch to all supported branches.
* Avoid killing btree items that are already dead  (Alvaro Herrera, 2020-05-15)
  _bt_killitems marks btree items dead when a scan leaves the page where they live, but it does so with only share lock (to improve concurrency). This was historically okay, since killing a dead item has no consequences. However, with the advent of data checksums and wal_log_hints, this action incurs a WAL full-page-image record of the page. Multiple concurrent processes would write the same page several times, leading to WAL bloat.
  The probability of this happening can be reduced by only killing items if they're not already dead, so change the code to do that.
  The problem could be eliminated completely by having _bt_killitems upgrade to exclusive lock upon seeing a killable item, but that would reduce concurrency so it's considered a cure worse than the disease.
  Backpatch all the way back to 9.5, since wal_log_hints was introduced in 9.4.
  Author: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>
  Discussion: https://postgr.es/m/CA+fd4k6PeRj2CkzapWNrERkja5G0-6D-YQiKfbukJV+qZGFZ_Q@mail.gmail.com
* Move check for fsync=off so that pendingOps still gets cleared.  (Heikki Linnakangas, 2020-05-14)
  Commit 3eb77eba5a moved the loop and refactored it, and inadvertently changed the effect of fsync=off so that it also skipped removing entries from the pendingOps table. That was not intentional, and leads to an assertion failure if you turn fsync on while the server is running and reload the config.
  Backpatch-through: 12-
  Reviewed-By: Thomas Munro
  Discussion: https://www.postgresql.org/message-id/3cbc7f4b-a5fa-56e9-9591-c886deb07513%40iki.fi
* Fix the MSVC build for versions 2015 and later.  (Amit Kapila, 2020-05-14)
  Visual Studio 2015 and later versions should still be able to do the same as Visual Studio 2012, but the declaration of locale_name is missing in _locale_t, causing the compilation to fail. Hence this falls back instead to enumerating all system locales by using EnumSystemLocalesEx to find the required locale name. If the input argument is in Unix style, then we can get the ISO locale name directly by using GetLocaleInfoEx() with LCType as LOCALE_SNAME.
  In passing, change the documentation references of the now obsolete links.
  Note that this problem occurs only with NLS enabled builds.
  Author: Juan José Santamaría Flecha, Davinder Singh and Amit Kapila
  Reviewed-by: Ranier Vilela and Amit Kapila
  Backpatch-through: 9.5
  Discussion: https://postgr.es/m/CAHzhFSFoJEWezR96um4-rg5W6m2Rj9Ud2CNZvV4NWc9tXV7aXQ@mail.gmail.com
* Fix pg_recvlogical avoidance of superfluous Standby Status Update.  (Noah Misch, 2020-05-13)
  The defect suppressed a Standby Status Update message when bytes flushed to disk had changed but bytes received had not changed. If pg_recvlogical then exited with no intervening Standby Status Update, the next pg_recvlogical repeated already-flushed records. The defect could also cause superfluous messages, which are functionally harmless.
  Back-patch to 9.5 (all supported versions).
  Discussion: https://postgr.es/m/20200502221647.GA3941274@rfd.leadboat.com
* In successful pg_recvlogical, end PGRES_COPY_OUT cleanly.  (Noah Misch, 2020-05-13)
  pg_recvlogical merely called PQfinish(), so the backend sent messages after the disconnect. When that caused EPIPE in internal_flush(), before a LogicalConfirmReceivedLocation(), the next pg_recvlogical would repeat already-acknowledged records. Whether or not the defect causes EPIPE, post-disconnect messages could contain an ErrorResponse that the user should see.
  One properly ends PGRES_COPY_OUT by repeating PQgetCopyData() until it returns a negative value. Augment one of the tests to cover the case of WAL past --endpos.
  Back-patch to v10, where commit 7c030783a5bd07cadffc2a1018bc33119a4c7505 first appeared. Before that commit, pg_recvlogical never reached PGRES_COPY_OUT.
  Reported by Thomas Munro.
  Discussion: https://postgr.es/m/CAEepm=1MzM2Z_xNe4foGwZ1a+MO_2S9oYDq3M5D11=JDU_+0Nw@mail.gmail.com
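  A hedged sketch of the drain-then-finish pattern described above, using only standard libpq calls (assumes an open PGconn *conn, <libpq-fe.h> and <stdio.h>; error handling trimmed):

      /* Keep calling PQgetCopyData() until it returns a negative value, then
       * collect the final result so any ErrorResponse is actually reported. */
      char   *buf;
      int     len;

      while ((len = PQgetCopyData(conn, &buf, 0)) > 0)
          PQfreemem(buf);         /* discard (or process) remaining copy data */

      if (len == -2)
          fprintf(stderr, "copy failed: %s", PQerrorMessage(conn));

      PGresult *res = PQgetResult(conn);
      if (res != NULL && PQresultStatus(res) != PGRES_COMMAND_OK)
          fprintf(stderr, "%s", PQerrorMessage(conn));
      PQclear(res);
      PQfinish(conn);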
* Stamp 12.3.  (tag: REL_12_3)  (Tom Lane, 2020-05-11)
* Translation updates  (Peter Eisentraut, 2020-05-11)
  Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: 60bf9b5caac08d0483f6f92ebf9ef2e0eef5b6bb
* Prevent archive recovery from scanning non-existent WAL files.  (Fujii Masao, 2020-05-09)
  Previously, when there were multiple timelines listed in the history file of the recovery target timeline, archive recovery searched all of them, starting from the newest timeline to the oldest one, to find the segment to read. That is, archive recovery had to continuously fail scanning the segment until it reached the timeline that the segment belonged to. These scans for non-existent segments could be harmful to recovery performance, especially when the archival area was located on remote storage and each scan could take a long time.
  To address the issue, this commit changes archive recovery so that it skips scanning any timeline that the segment to read doesn't belong to.
  Per discussion, back-patch to all supported versions.
  Author: Kyotaro Horiguchi, tweaked a bit by Fujii Masao
  Reviewed-by: David Steele, Pavel Suderevsky, Grigory Smolkin
  Discussion: https://postgr.es/m/16159-f5a34a3a04dc67e0@postgresql.org
  Discussion: https://postgr.es/m/20200129.120222.1476610231001551715.horikyota.ntt@gmail.com
* pg_restore: Provide file name with one failure message  (Alvaro Herrera, 2020-05-08)
  Almost all error messages already include the file name where relevant, but this one had been overlooked. Repair.
  Backpatch to 9.5.
  Author: Euler Taveira <euler.taveira@2ndquadrant.com>
  Discussion: https://postgr.es/m/CAH503wA_VOrcKL_43p9atRejCDYmOZ8MzfK9S6TJrQqBqNeAXA@mail.gmail.com
  Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
* Fix several DDL issues of generated columns versus inheritance  (Peter Eisentraut, 2020-05-08)
  Several combinations of generated columns and inheritance in CREATE TABLE were not handled correctly. Specifically:
  - Disallow a child column specifying a generation expression if the parent column is a generated column. The child column definition must be unadorned and the parent column's generation expression will be copied.
  - Prohibit a child column of a generated parent column specifying default values or identity.
  - Allow a child column of a not-generated parent column specifying itself as a generated column. This previously did not work, but it was possible to arrive at the state via other means (involving ALTER TABLE), so it seems sensible to support it.
  Add tests for each case. Also add documentation about the rules involving generated columns and inheritance.
  Discussion: https://www.postgresql.org/message-id/flat/15830.1575468847%40sss.pgh.pa.us
    https://www.postgresql.org/message-id/flat/2678bad1-048f-519a-ef24-b12962f41807%40enterprisedb.com
    https://www.postgresql.org/message-id/flat/CAJvUf_u4h0DxkCMCeEKAWCuzGUTnDP-G5iVmSwxLQSXn0_FWNQ%40mail.gmail.com
* Propagate ALTER TABLE ... SET STORAGE to indexes  (Peter Eisentraut, 2020-05-08)
  When creating a new index, the attstorage setting of the table column is copied to regular (non-expression) index columns. But a later ALTER TABLE ... SET STORAGE is not propagated to indexes, thus creating an inconsistent and undumpable state.
  Discussion: https://www.postgresql.org/message-id/flat/9765d72b-37c0-06f5-e349-2a580aafd989%402ndquadrant.com
* Report missing wait event for timeline history file.  (Fujii Masao, 2020-05-08)
  The TimelineHistoryRead and TimelineHistoryWrite wait events are reported while waiting for a read and write of a timeline history file, respectively. However, previously, the TimelineHistoryRead wait event was not reported while readTimeLineHistory() was reading a timeline history file. Also, TimelineHistoryWrite was not reported while writeTimeLineHistory() was writing one line with the details of the timeline split, at the end. This commit fixes these issues.
  Back-patch to v10 where wait events for a timeline history file were added.
  Author: Masahiro Ikeda
  Reviewed-by: Michael Paquier, Fujii Masao
  Discussion: https://postgr.es/m/d11b0c910b63684424e06772eb844ab5@oss.nttdata.com
* Fix YA text phrase search bug.  (Tom Lane, 2020-05-07)
  checkcondition_str() failed to report multiple matches for a prefix pattern correctly: it would dutifully merge the match positions, but then after exiting that loop, if the last prefix-matching word had had no suitable positions, it would report there were no matches. The upshot would be failing to recognize a match that the query should match.
  It looks like you need all of these conditions to see the bug:
  * a phrase search (else we don't ask for match position details)
  * a prefix search item (else we don't get to this code)
  * a weight restriction (else checkclass_str won't fail)
  Noted while investigating a problem report from Pavel Borisov, though this is distinct from the issue he was on about.
  Back-patch to 9.6 where phrase search was added.
* Heed lock protocol in DROP OWNED BY  (Alvaro Herrera, 2020-05-06)
  We were acquiring object locks then deleting objects one by one, instead of acquiring all object locks first, ignoring those that did not exist, and then deleting all objects together. The latter is the correct protocol to use, and what this commit changes the code to do. Failing to follow that leads to "cache lookup failed for relation XYZ" error reports when DROP OWNED runs concurrently with other DDL -- for example, a session termination that removes some temp tables.
  Author: Álvaro Herrera
  Reported-by: Mithun Chicklore Yogendra (Mithun CY)
  Reviewed-by: Ahsan Hadi, Tom Lane
  Discussion: https://postgr.es/m/CADq3xVZTbzK4ZLKq+dn_vB4QafXXbmMgDP3trY-GuLnib2Ai1w@mail.gmail.com
* Handle spaces for Python install location in MSVC scripts  (Michael Paquier, 2020-05-06)
  Attempting to use an installation path of Python that includes spaces caused the MSVC builds to fail. This fixes the issue by using the same quoting method as ad7595b for OpenSSL.
  Author: Victor Wagner
  Discussion: https://postgr.es/m/20200430150608.6dc6b8c4@antares.wagner.home
  Backpatch-through: 9.5
* Fix severe memory leaks in GSSAPI encryption support.  (Tom Lane, 2020-05-05)
  Both the backend and libpq leaked buffers containing encrypted data to be transmitted, so that the process size would grow roughly as the total amount of data sent. There were also far-less-critical leaks of the same sort in GSSAPI session establishment.
  Oversight in commit b0b39f72b, which I failed to notice while reviewing the code in 2c0cdc818.
  Per complaint from pmc@citylink. Back-patch to v12 where this code was introduced.
  Discussion: https://postgr.es/m/20200504115649.GA77072@gate.oper.dinoex.org
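  For readers unfamiliar with the GSSAPI buffer contract, a hedged sketch of the pattern the fix restores (function and variable names illustrative, not the PostgreSQL code): every gss_buffer_desc filled in by gss_wrap() must be handed back with gss_release_buffer() once its contents have been sent, otherwise each wrapped packet is leaked.

      #include <stddef.h>
      #include <gssapi/gssapi.h>

      static int
      send_encrypted(gss_ctx_id_t ctx, const void *data, size_t len,
                     int (*send_bytes) (const void *, size_t))
      {
          OM_uint32       major, minor;
          gss_buffer_desc input, output;
          int             conf_state;
          int             rc;

          input.value = (void *) data;
          input.length = len;

          major = gss_wrap(&minor, ctx, 1, GSS_C_QOP_DEFAULT,
                           &input, &conf_state, &output);
          if (major != GSS_S_COMPLETE)
              return -1;

          rc = send_bytes(output.value, output.length);

          gss_release_buffer(&minor, &output);    /* the step whose omission leaked */
          return rc;
      }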
* Fix GSS client to non-GSS server connection  (Stephen Frost, 2020-05-02)
  If the client is compiled with GSSAPI support and tries to start up GSS with the server, but the server is not compiled with GSSAPI support, we would mistakenly end up falling through to call ProcessStartupPacket with secure_done = true, but the client might then try to perform SSL, which the backend wouldn't understand and we'd end up failing the connection with:
    FATAL: unsupported frontend protocol 1234.5679: server supports 2.0 to 3.0
  Fix by arranging to track ssl_done independently from gss_done, instead of trying to use the same boolean for both.
  Author: Andrew Gierth
  Discussion: https://postgr.es/m/87h82kzwqn.fsf@news-spur.riddles.org.uk
  Backpatch: 12-, where GSSAPI encryption was added.
* Get rid of trailing semicolons in C macro definitions.  (Tom Lane, 2020-05-01)
  Writing a trailing semicolon in a macro is almost never the right thing, because you almost always want to write a semicolon after each macro call instead. (Even if there was some reason to prefer not to, pgindent would probably make a hash of code formatted that way; so within PG the rule should basically be "don't do it".) Thus, if we have a semi inside the macro, the compiler sees "something;;". Much of the time the extra empty statement is harmless, but it could lead to mysterious syntax errors at call sites. In perhaps an overabundance of neatnik-ism, let's run around and get rid of the excess semicolons wherever possible.
  The only thing worse than a mysterious syntax error is a mysterious syntax error that only happens in the back branches; therefore, backpatch these changes where relevant, which is most of them because most of these mistakes are old. (The lack of reported problems shows that this is largely a hypothetical issue, but still, it could bite us in some future patch.)
  John Naylor and Tom Lane
  Discussion: https://postgr.es/m/CACPNZCs0qWTqJ2QUSGJ07B7uvAvzMb-KbG2q+oo+J3tsWN5cqw@mail.gmail.com
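  A self-contained illustration of the failure mode (not code from the tree): with a semicolon baked into the macro, the expansion at an if/else call site ends in ";;", the empty statement terminates the if, and the "else" is left orphaned.

      #include <string.h>

      #define BAD_CLEAR(buf)   memset((buf), 0, sizeof(buf));  /* trailing semi: wrong */
      #define GOOD_CLEAR(buf)  memset((buf), 0, sizeof(buf))   /* caller adds the semi */

      int
      reset(int have_scratch)
      {
          char scratch[16];

          if (have_scratch)
              GOOD_CLEAR(scratch);        /* fine */
          else
              return 0;

      #if 0  /* enabling this fails to compile: "else" without a previous "if" */
          if (have_scratch)
              BAD_CLEAR(scratch);
          else
              return 0;
      #endif

          return (int) scratch[0];
      }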
* Clear up issue with FSM and oldest btpo.xact.  (Peter Geoghegan, 2020-05-01)
  On further reflection, code comments added by commit b0229f26 slightly misrepresented how we determine the oldest btpo.xact for the index. btvacuumpage() does not treat the btpo.xact of a page that it put in the FSM as a candidate to be the oldest deleted page (the delete-marked page that has the oldest btpo.xact XID among all pages encountered).
  The definition of a deleted page for the purposes of the btpo.xact calculation is different from the definition used by the bulk delete statistics. The bulk delete statistics don't distinguish between pages that were deleted by the current VACUUM, pages deleted by a previous VACUUM operation but not yet recyclable/reusable, and pages that are reusable (though reusable pages are counted separately).
  Backpatch: 11-, just like commit b0229f26.
* Fix undercounting in VACUUM VERBOSE output.  (Peter Geoghegan, 2020-05-01)
  The logic for determining how many nbtree pages in an index are deleted pages sometimes undercounted pages. Pages that were deleted by the current VACUUM operation (as opposed to some previous VACUUM operation whose deleted pages have yet to be reused) were sometimes overlooked. The final count is exposed to users through VACUUM VERBOSE's "%u index pages have been deleted" output.
  btvacuumpage() avoided double-counting when _bt_pagedel() deleted more than one page by assuming that only one page was deleted, and that the additional deleted pages would get picked up during a future call to btvacuumpage() by the same VACUUM operation. _bt_pagedel() can legitimately delete pages that the btvacuumscan() scan will not visit again, though, so that assumption was slightly faulty.
  Fix the accounting by teaching _bt_pagedel() about its caller's requirements. It now only reports on pages that it knows btvacuumscan() won't visit again (including the current btvacuumpage() page), so everything works out in the end.
  This bug has been around forever. Only backpatch to v11, though, to keep _bt_pagedel() in sync on the branches that have today's bugfix commit b0229f26da. Note that this commit changes the signature of _bt_pagedel(), just like commit b0229f26da.
  Author: Peter Geoghegan
  Reviewed-By: Masahiko Sawada
  Discussion: https://postgr.es/m/CAH2-WzkrXBcMQWAYUJMFTTvzx_r4q=pYSjDe07JnUXhe+OZnJA@mail.gmail.com
  Backpatch: 11-
* Fix bug in nbtree VACUUM "skip full scan" feature.  (Peter Geoghegan, 2020-05-01)
  Commit 857f9c36cda (which taught nbtree VACUUM to skip a scan of the index from btcleanup in situations where it doesn't seem worth it) made VACUUM maintain the oldest btpo.xact among all deleted pages for the index as a whole. It failed to handle all the details surrounding pages that are deleted by the current VACUUM operation correctly (though pages deleted by some previous VACUUM operation were processed correctly).
  The most immediate problem was that the special area of the page was examined without a buffer pin at one point. More fundamentally, the handling failed to account for the full range of _bt_pagedel() behaviors. For example, _bt_pagedel() sometimes deletes internal pages in passing, as part of deleting an entire subtree with btvacuumpage() caller's page as the leaf level page. The original leaf page passed to _bt_pagedel() might not be the page that it deletes first in cases where deletion can take place.
  It's unclear how disruptive this bug may have been, or what symptoms users might want to look out for. The issue was spotted during unrelated code review.
  To fix, push down the logic for maintaining the oldest btpo.xact to _bt_pagedel(). btvacuumpage() is now responsible for pages that were fully deleted by a previous VACUUM operation, while _bt_pagedel() is now responsible for pages that were deleted by the current VACUUM operation (this includes half-dead pages from a previous interrupted VACUUM operation that become fully deleted in _bt_pagedel()). Note that _bt_pagedel() should never encounter an existing deleted page.
  This commit theoretically breaks the ABI of a stable release by changing the signature of _bt_pagedel(). However, if any third party extension is actually affected by this, then it must already be completely broken (since there are numerous assumptions made in _bt_pagedel() that cannot be met outside of VACUUM). It seems highly unlikely that such an extension actually exists, in any case.
  Author: Peter Geoghegan
  Reviewed-By: Masahiko Sawada
  Discussion: https://postgr.es/m/CAH2-WzkrXBcMQWAYUJMFTTvzx_r4q=pYSjDe07JnUXhe+OZnJA@mail.gmail.com
  Backpatch: 11-, where the "skip full scan" feature was introduced.
* Fix bogus tar-file padding logic for standby.signal.  (Robert Haas, 2020-04-27)
  When pg_basebackup -R is used, we inject standby.signal into the tar file for the main tablespace. The proper thing to do is to pad each file injected into the tar file out to a 512-byte boundary by appending nulls, but here the file is of length 0 and we add 511 zero bytes. Since 0 is already a multiple of 512, we should not add any zero bytes. Do that instead.
  Patch by me, reviewed by Tom Lane.
  Discussion: http://postgr.es/m/CA+TgmobWbfReO9-XFk8urR1K4wTNwqoHx_v56t7=T8KaiEoKNw@mail.gmail.com
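  A hedged sketch of the padding arithmetic implied by the description (not the pg_basebackup source): rounding up to the next 512-byte boundary with (512 - len % 512) % 512 yields zero extra bytes for a zero-length member such as standby.signal.

      #include <stdio.h>
      #include <stddef.h>

      #define TAR_BLOCK 512

      static size_t
      tar_padding(size_t file_len)
      {
          /* Bytes needed to reach the next 512-byte boundary; 0 when already aligned. */
          return (TAR_BLOCK - file_len % TAR_BLOCK) % TAR_BLOCK;
      }

      int
      main(void)
      {
          printf("%zu\n", tar_padding(0));     /* 0   -- the standby.signal case */
          printf("%zu\n", tar_padding(1));     /* 511 */
          printf("%zu\n", tar_padding(512));   /* 0   */
          return 0;
      }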
* Fix full text search to handle NOT above a phrase search correctly.  (Tom Lane, 2020-04-27)
  Queries such as '!(foo<->bar)' failed to find matching rows when implemented as a GiST or GIN index search. That's because of failing to handle phrase searches as tri-valued when considering a query without any position information for the target tsvector. We can only say that the phrase operator might match, not that it does match; and therefore its NOT also might match. The previous coding incorrectly inverted the approximate phrase result to decide that there was certainly no match.
  To fix, we need to make TS_phrase_execute return a real ternary result, and then bubble that up accurately in TS_execute. As long as we have to do that anyway, we can simplify the baroque things TS_phrase_execute was doing internally to manage tri-valued searching with only a bool as explicit result.
  For now, I left the externally-visible result of TS_execute as a plain bool. There do not appear to be any outside callers that need to distinguish a three-way result, given that they passed in a flag saying what to do in the absence of position data. This might need to change someday, but we wouldn't want to back-patch such a change.
  Although tsginidx.c has its own TS_execute_ternary implementation for use at upper index levels, that sadly managed to get this case wrong as well :-(. Fixing it is a lot easier fortunately.
  Per bug #16388 from Charles Offenbacher. Back-patch to 9.6 where phrase search was introduced.
  Discussion: https://postgr.es/m/16388-98cffba38d0b7e6e@postgresql.org
* Fix error case for CREATE ROLE ... IN ROLE.  (Andrew Gierth, 2020-04-25)
  CreateRole() was passing a Value node, not a RoleSpec node, for the newly-created role name when adding the role as a member of existing roles for the IN ROLE syntax. This mistake went unnoticed because the node in question is used only for error messages and is not accessed on non-error paths.
  In older pg versions (such as 9.5 where this was found), this results in an "unexpected node type" error in place of the real error. That node type check was removed at some point, after which the code would accidentally fail to fail on 64-bit platforms (on which accessing the Value node as if it were a RoleSpec would be mostly harmless) or give an "unexpected role type" error on 32-bit platforms.
  Fix the code to pass the correct node type, and add an lfirst_node assertion just in case.
  Per report on irc from user m1chelangelo. Backpatch all the way, because this error has been around for a long time.
* Update Windows timezone name list to include currently-known zones.  (Tom Lane, 2020-04-24)
  Thanks to Juan José Santamaría Flecha.
  Discussion: https://postgr.es/m/5752.1587740484@sss.pgh.pa.us