path: root/src
Commit message / Author / Age
* In the pg_upgrade test suite, don't write to src/test/regress. (Noah Misch, 2019-05-19)
| | | | | | | | | | | | | | | | | | | | When this suite runs installcheck, redirect file creations from src/test/regress to src/bin/pg_upgrade/tmp_check/regress. This closes a race condition in "make -j check-world". If the pg_upgrade suite wrote to a given src/test/regress/results file in parallel with the regular src/test/regress invocation writing it, a test failed spuriously. Even without parallelism, in "make -k check-world", the suite finishing second overwrote the other's regression.diffs. This revealed test "largeobject" assuming @abs_builddir@ is getcwd(), so fix that, too. Buildfarm client REL_10, released forty-five days ago, supports saving regression.diffs from its new location. When an older client reports a pg_upgradeCheck failure, it will no longer include regression.diffs. Back-patch to 9.5, where pg_upgrade moved to src/bin. Reviewed by Andrew Dunstan. Discussion: https://postgr.es/m/20181224034411.GA3224776@rfd.leadboat.com
* Fix misuse of an integer as a bool. (Tom Lane, 2019-05-13)
| | | | | | | | | | | | | | | | | | | pgtls_read_pending is declared to return bool, but what the underlying SSL_pending function returns is a count of available bytes. This is actually somewhat harmless if we're using C99 bools, but in the back branches it's a live bug: if the available-bytes count happened to be a multiple of 256, it would get converted to a zero char value. On machines where char is signed, counts of 128 and up could misbehave as well. The net effect is that when using SSL, libpq might block waiting for data even though some has already been received. Broken by careless refactoring in commit 4e86f1b16, so back-patch to 9.5 where that came in. Per bug #15802 from David Binderman. Discussion: https://postgr.es/m/15802-f0911a97f0346526@postgresql.org
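  A minimal standalone sketch (not libpq code) of the truncation hazard described above, assuming a pre-C99 "bool" typedef'd as char: a byte count that is a multiple of 256 collapses to zero, i.e. "no data pending".

      #include <stdio.h>

      typedef char pqbool;                    /* stand-in for a pre-C99 bool */

      static int fake_SSL_pending(void) { return 256; }   /* bytes buffered by SSL */

      static pqbool read_pending_buggy(void)
      {
          return fake_SSL_pending();          /* 256 truncates to 0 in a char */
      }

      static pqbool read_pending_fixed(void)
      {
          return fake_SSL_pending() > 0;      /* collapse the count to a real boolean */
      }

      int main(void)
      {
          printf("buggy: %d  fixed: %d\n", read_pending_buggy(), read_pending_fixed());
          return 0;
      }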
* Fix misoptimization of "{1,1}" quantifiers in regular expressions. (Tom Lane, 2019-05-12)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | A bounded quantifier with m = n = 1 might be thought a no-op. But according to our documentation (which traces back to Henry Spencer's original man page) it still imposes greediness, or non-greediness in the case of the non-greedy variant "{1,1}?", on whatever it's attached to. This turns out not to work though, because parseqatom() optimizes away the m = n = 1 case without regard for whether it's supposed to change the greediness of the argument RE. We can fix this by just not applying the optimization when the greediness needs to change; the subsequent general cases handle it fine. The three cases in which we can still apply the optimization are (a) no quantifier, or quantifier does not impose a preference; (b) atom has no greediness property, implying it cannot match a variable amount of text anyway; or (c) quantifier's greediness is same as atom's. Note that in most cases where one of these applies, we'd have exited earlier in the "not a messy case" fast path. I think it's now only possible to get to the optimization when the atom involves capturing parentheses or a non-top-level backref. Back-patch to all supported branches. I'd ordinarily be hesitant to put a subtle behavioral change into back branches, but in this case it's very hard to see a reason why somebody would write "{1,1}?" unless they're trying to get the documented change-of-greediness behavior. Discussion: https://postgr.es/m/5bb27a41-350d-37bf-901e-9d26f5592dd0@charter.net
* Fail pgwin32_message_to_UTF16() for SQL_ASCII messages. (Noah Misch, 2019-05-12)
| | | | | | | | | | | | The function had been interpreting SQL_ASCII messages as UTF8, throwing an error when they were invalid UTF8. The new behavior is consistent with pg_do_encoding_conversion(). This affects LOG_DESTINATION_STDERR and LOG_DESTINATION_EVENTLOG, which will send untranslated bytes to write() and ReportEventA(). On buildfarm member bowerbird, enabling log_connections caused an error whenever the role name was not valid UTF8. Back-patch to 9.4 (all supported versions). Discussion: https://postgr.es/m/20190512015615.GD1124997@rfd.leadboat.com
* Rearrange pgstat_bestart() to avoid failures within its critical section. (Tom Lane, 2019-05-11)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | We long ago decided to design the shared PgBackendStatus data structure to minimize the cost of writing status updates, which means that writers just have to increment the st_changecount field twice. That isn't hooked into any sort of resource management mechanism, which means that if something were to throw error between the two increments, the st_changecount field would be left odd indefinitely. That would cause readers to lock up. Now, since it's also a bad idea to leave the field odd for longer than absolutely necessary (because readers will spin while we have it set), the expectation was that we'd treat these segments like spinlock critical sections, with only short, more or less straight-line, code in them. That was fine as originally designed, but commit 9029f4b37 broke it by inserting a significant amount of non-straight-line code into pgstat_bestart(), code that is very capable of throwing errors, not to mention taking a significant amount of time during which readers will spin. We have a report from Neeraj Kumar of readers actually locking up, which I suspect was due to an encoding conversion error in X509_NAME_to_cstring, though conceivably it was just a garden-variety OOM failure. Subsequent commits have loaded even more dubious code into pgstat_bestart's critical section (and commit fc70a4b0d deserves some kind of booby prize for managing to miss the critical section entirely, although the negative consequences seem minimal given that the PgBackendStatus entry should be seen by readers as inactive at that point). The right way to fix this mess seems to be to compute all these values into a local copy of the process' PgBackendStatus struct, and then just copy the data back within the critical section proper. This plan can't be implemented completely cleanly because of the struct's heavy reliance on out-of-line strings, which we must initialize separately within the critical section. But still, the critical section is far smaller and safer than it was before. In hopes of forestalling future errors of the same ilk, rename the macros for st_changecount management to make it more apparent that the writer-side macros create a critical section. And to prevent the worst consequences if we nonetheless manage to mess it up anyway, adjust those macros so that they really are a critical section, ie they now bump CritSectionCount. That doesn't add much overhead, and it guarantees that if we do somehow throw an error while the counter is odd, it will lead to PANIC and a database restart to reset shared memory. Back-patch to 9.5 where the problem was introduced. In HEAD, also fix an oversight in commit b0b39f72b: it failed to teach pgstat_read_current_status to copy st_gssstatus data from shared memory to local memory. Hence, subsequent use of that data within the transaction would potentially see changing data that it shouldn't see. Discussion: https://postgr.es/m/CAPR3Wj5Z17=+eeyrn_ZDG3NQGYgMEOY6JV6Y-WRRhGgwc16U3Q@mail.gmail.com
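  A simplified sketch (hypothetical types and names, memory barriers omitted) of the st_changecount protocol described above: the writer makes the count odd, fills in the fields, then makes it even again; readers retry until they observe the same even value before and after copying.

      typedef struct
      {
          volatile int changecount;
          int          pid;                   /* stand-ins for the real status fields */
          int          state;
      } Entry;

      static void
      write_entry(Entry *e, int pid, int state)
      {
          e->changecount++;                   /* odd: readers must not trust the fields */
          e->pid = pid;                       /* keep this section short and unable to fail */
          e->state = state;
          e->changecount++;                   /* even: contents are consistent again */
      }

      static void
      read_entry(Entry *e, Entry *out)
      {
          for (;;)
          {
              int before = e->changecount;

              *out = *e;
              if (before == e->changecount && (before & 1) == 0)
                  return;                     /* stable, even count: the copy is consistent */
              /* a writer was active; spin and retry */
          }
      }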
* Fix error reporting in reindexdb (Michael Paquier, 2019-05-11)
| | | | | | | | | | | | | When failing to reindex a table or an index, reindexdb would generate an extra error message related to a database failure, which is misleading. Backpatch all the way down, as this has been introduced by 85e9a5a0. Discussion: https://postgr.es/m/CAOBaU_Yo61RwNO3cW6WVYWwH7EYMPuexhKqufb2nFGOdunbcHw@mail.gmail.com Author: Julien Rouhaud Reviewed-by: Daniel Gustafsson, Álvaro Herrera, Tom Lane, Michael Paquier Backpatch-through: 9.4
* Cope with EINVAL and EIDRM shmat() failures in PGSharedMemoryAttach. (Tom Lane, 2019-05-10)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | There's a very old race condition in our code to see whether a pre-existing shared memory segment is still in use by a conflicting postmaster: it's possible for the other postmaster to remove the segment in between our shmctl() and shmat() calls. It's a narrow window, and there's no risk unless both postmasters are using the same port number, but that's possible during parallelized "make check" tests. (Note that while the TAP tests take some pains to choose a randomized port number, pg_regress doesn't.) If it does happen, we treated that as an unexpected case and errored out. To fix, allow EINVAL to be treated as segment-not-present, and the same for EIDRM on Linux. AFAICS, the considerations here are basically identical to the checks for acceptable shmctl() failures, so I documented and coded it that way. While at it, adjust PGSharedMemoryAttach's API to remove its undocumented dependency on UsedShmemSegAddr in favor of passing the attach address explicitly. This makes it easier to be sure we're using a null shmaddr when probing for segment conflicts (thus avoiding questions about what EINVAL means). I don't think there was a bug there, but it required fragile assumptions about the state of UsedShmemSegAddr during PGSharedMemoryIsInUse. Commit c09850992 may have made this failure more probable by applying the conflicting-segment tests more often. Hence, back-patch to all supported branches, as that was. Discussion: https://postgr.es/m/22224.1557340366@sss.pgh.pa.us
* Fix error status of vacuumdb when multiple jobs are used (Michael Paquier, 2019-05-09)
  When running a batch of VACUUM or ANALYZE commands on a given database, there were cases where vacuumdb would not report an error when it actually should have, leading to incorrect status results.
  Author: Julien Rouhaud
  Reviewed-by: Amit Kapila, Michael Paquier
  Discussion: https://postgr.es/m/CAOBaU_ZuTwz7CtqLYJ1Ouuh272bTQPLN8b1bAPk0bCBm4PDMTQ@mail.gmail.com
  Backpatch-through: 9.5
* Remove leftover reference to old "flat file" mechanism in a comment. (Heikki Linnakangas, 2019-05-08)
| | | | The flat file mechanism was removed in PostgreSQL 9.0.
* Remove some code related to 7.3 and older servers from tools of src/bin/ (Michael Paquier, 2019-05-07)
| | | | | | | | | | This code was broken as of 582edc3, and is most likely not used anymore. Note that pg_dump supports servers down to 8.0, and psql has code to support servers down to 7.4. Author: Julien Rouhaud Reviewed-by: Tom Lane Discussion: https://postgr.es/m/CAOBaU_Y5y=zo3+2gf+2NJC1pvMYPcbRXoQaPXx=U7+C8Qh4CzQ@mail.gmail.com
* Stamp 9.5.17. (tag: REL9_5_17) (Tom Lane, 2019-05-06)
* Translation updates (Peter Eisentraut, 2019-05-06)
| | | | | Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: 499248e4e6cd0dea44450fb13352e7a03fccb00e
* Use checkAsUser for selectivity estimator checks, if it's set. (Dean Rasheed, 2019-05-06)
| | | | | | | | | | | | | | | | | | | | | | | | | In examine_variable() and examine_simple_variable(), when checking the user's table and column privileges to determine whether to grant access to the pg_statistic data, use checkAsUser for the privilege checks, if it's set. This will be the case if we're accessing the table via a view, to indicate that we should perform privilege checks as the view owner rather than the current user. This change makes this planner check consistent with the check in the executor, so the planner will be able to make use of statistics if the table is accessible via the view. This fixes a performance regression introduced by commit e2d4ef8de8, which affects queries against non-security barrier views in the case where the user doesn't have privileges on the underlying table, but the view owner does. Note that it continues to provide the same safeguards controlling access to pg_statistic for direct table access (in which case checkAsUser won't be set) and for security barrier views, because of the nearby checks on rte->security_barrier and rte->securityQuals. Back-patch to all supported branches because e2d4ef8de8 was. Dean Rasheed, reviewed by Jonathan Katz and Stephen Frost.
* Fix security checks for selectivity estimation functions with RLS. (Dean Rasheed, 2019-05-06)
| | | | | | | | | | | | | | | | | | | | | | In commit e2d4ef8de8, security checks were added to prevent user-supplied operators from running over data from pg_statistic unless the user has table or column privileges on the table, or the operator is leakproof. For a table with RLS, however, checking for table or column privileges is insufficient, since that does not guarantee that the user has permission to view all of the column's data. Fix this by also checking for securityQuals on the RTE, and insisting that the operator be leakproof if there are any. Thus the leakproofness check will only be skipped if there are no securityQuals and the user has table or column privileges on the table -- i.e., only if we know that the user has access to all the data in the column. Back-patch to 9.5 where RLS was added. Dean Rasheed, reviewed by Jonathan Katz and Stephen Frost. Security: CVE-2019-10130
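  A rough condensation (purely illustrative field and helper names, not the actual examine_variable() code) of the rule the two commits above arrive at: a non-leakproof operator may see pg_statistic data only when the RTE carries no securityQuals and the privilege check, run as checkAsUser when that is set (i.e. access via a view), succeeds.

      #include <stdbool.h>

      typedef struct
      {
          bool has_security_quals;    /* RLS or security-barrier quals attached to the RTE */
          int  check_as_user;         /* view owner when access is via a view, else 0 */
      } RteInfo;

      static bool
      can_use_statistics(const RteInfo *rte, int current_user,
                         bool operator_is_leakproof,
                         bool (*has_table_or_column_priv)(int user))
      {
          int user = rte->check_as_user ? rte->check_as_user : current_user;

          if (operator_is_leakproof)
              return true;                        /* safe no matter who is asking */
          if (rte->has_security_quals)
              return false;                       /* RLS: privileges alone do not cover all rows */
          return has_table_or_column_priv(user);  /* plain table or non-security-barrier view */
      }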
* Remove reindex_catalog test from test schedules. (Andres Freund, 2019-05-05)
| | | | | | | | | | | | | | | | | | | As the test currently causes occasional deadlocks (due to the schema cleanup from previous sessions potentially still running), and the patch from f912d7dec2 has gotten a fair bit of buildfarm coverage, remove the test from the test schedules. There's a set of minor releases coming up. Leave the tests in place, so it can manually be run using EXTRA_TESTS. For now also leave it in master, as there's no imminent release, and there's plenty (re-)index related work in 12. But we'll have to disable it before long there too, unless somebody comes up with simple enough fixes for the deadlock (I'm about to post a vague idea to the list). Discussion: https://postgr.es/m/4622.1556982247@sss.pgh.pa.us Backpatch: 9.4-11 (no master!)
* Fix reindexing of pg_class indexes some more. (Tom Lane, 2019-05-02)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Commits 3dbb317d3 et al failed under CLOBBER_CACHE_ALWAYS testing. Investigation showed that to reindex pg_class_oid_index, we must suppress accesses to the index (via SetReindexProcessing) before we call RelationSetNewRelfilenode, or at least before we do CommandCounterIncrement therein; otherwise, relcache reloads happening within the CCI may try to fetch pg_class rows using the index's new relfilenode value, which is as yet an empty file. Of course, the point of 3dbb317d3 was that that ordering didn't work either, because then RelationSetNewRelfilenode's own update of the index's pg_class row cannot access the index, should it need to. There are various ways we might have got around that, but Andres Freund came up with a brilliant solution: for a mapped index, we can really just skip the pg_class update altogether. The only fields it was actually changing were relpages etc, but it was just setting them to zeroes which is useless make-work. (Correct new values will be installed at the end of index build.) All pg_class indexes are mapped and probably always will be, so this eliminates the problem by removing work rather than adding it, always a pleasant outcome. Having taught RelationSetNewRelfilenode to do it that way, we can revert the code reordering in reindex_index. (But I left the moved setup code where it was; there seems no reason why it has to run without use of the old index. If you're trying to fix a busted pg_class index, you'll have had to disable system index use altogether to get this far.) Moreover, this means we don't need RelationSetIndexList at all, because reindex_relation's hacking to make "REINDEX TABLE pg_class" work is likewise now unnecessary. We'll leave that code in place in the back branches, but a follow-on patch will remove it in HEAD. In passing, do some minor cleanup for commit 5c1560606 (in HEAD only), notably removing a duplicate newrnode assignment. Patch by me, using a core idea due to Andres Freund. Back-patch to all supported branches, as 3dbb317d3 was. Discussion: https://postgr.es/m/28926.1556664156@sss.pgh.pa.us
* Run catalog reindexing test from 3dbb317d32 serially, to avoid deadlocks. (Andres Freund, 2019-04-30)
  The tests turn out to cause deadlocks in some circumstances. Fairly reproducibly so with -DRELCACHE_FORCE_RELEASE -DCATCACHE_FORCE_RELEASE. Some of the deadlocks may be hard to fix without disproportionate measures, but others probably should be fixed - but not in 12.
  We discussed removing the new tests until we can fix the issues underlying the deadlocks, but results from buildfarm animal markhor (which runs with CLOBBER_CACHE_ALWAYS) indicate that there might be a more severe, as yet undiagnosed, issue (including on stable branches) with reindexing catalogs. The failure is:
  ERROR: could not read block 0 in file "base/16384/28025": read only 0 of 8192 bytes
  Therefore it seems advisable to keep the tests.
  It's not certain that running the tests in isolation removes the risk of deadlocks. It's possible that additional locks are needed to protect against a concurrent auto-analyze or such.
  Per discussion with Tom Lane.
  Discussion: https://postgr.es/m/28926.1556664156@sss.pgh.pa.us
  Backpatch: 9.4-11 (no master!)
* Fix unused variable compiler warning in !debug builds. (Andres Freund, 2019-04-30)
| | | | | | | | Introduced in 3dbb317d3. Fix by using the new local variable in more places. Reported-By: Bruce Momjian (off-list) Backpatch: 9.4-, like 3dbb317d3
* Fix potential assertion failure when reindexing a pg_class index. (Andres Freund, 2019-04-29)
  When reindexing individual indexes on pg_class it was possible to trigger an assertion failure:
  TRAP: FailedAssertion("!(!ReindexIsProcessingIndex(((index)->rd_id)))
  That's because reindex_index() called SetReindexProcessing() - which enables an assertion ensuring no index insertions happen into the index - before calling RelationSetNewRelfilenode(). That is not correct for indexes on pg_class, because RelationSetNewRelfilenode() updates the relevant pg_class row, which needs to update the indexes.
  There are two reasons this wasn't noticed earlier. Firstly, the bug doesn't trigger when reindexing all of pg_class, as reindex_relation has code "hiding" all yet-to-be-reindexed indexes. Secondly, the bug only triggers when the update to pg_class doesn't turn out to be a HOT update - otherwise there's no index insertion to trigger the bug. Most of the time there's enough space, making this bug hard to trigger.
  To fix, move RelationSetNewRelfilenode() to before the SetReindexProcessing() (and, together with some other code, to outside of the PG_TRY()). To make sure the error checking intended by SetReindexProcessing() is more robust, modify CatalogIndexInsert() to check ReindexIsProcessingIndex() even when the update is a HOT update.
  Also add a few regression tests for REINDEXing of system catalogs.
  The last two improvements would have prevented some of the issues fixed in 5c1560606dc4c from being introduced in the first place.
  Reported-By: Michael Paquier
  Diagnosed-By: Tom Lane and Andres Freund
  Author: Andres Freund
  Reviewed-By: Tom Lane
  Discussion: https://postgr.es/m/20190418011430.GA19133@paquier.xyz
  Backpatch: 9.4-, the bug is present in all branches
* Portability fix for zic.c. (Tom Lane, 2019-04-26)
| | | | Missed an inttypes.h dependency in previous patch. Per buildfarm.
* Sync our copy of the timezone library with IANA release tzcode2019a. (Tom Lane, 2019-04-26)
| | | | | | | | | | | | | | | | | | | | | | This corrects a small bug in zic that caused it to output an incorrect year-2440 transition in the Africa/Casablanca zone. More interestingly, zic has grown a "-r" option that limits the range of zone transitions that it will put into the output files. That might be useful to people who don't like the weird GMT offsets that tzdb likes to use for very old dates. It appears that for dates before the cutoff time specified with -r, zic will use the zone's standard-time offset as of the cutoff time. So for example one might do make install ZIC_OPTIONS='-r @-1893456000' to cause all dates before 1910-01-01 to be treated as though 1910 standard time prevailed indefinitely far back. (Don't blame me for the unfriendly way of specifying the cutoff time --- it's seconds since or before the Unix epoch. You can use extract(epoch ...) to calculate it.) As usual, back-patch to all supported branches.
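  A quick arithmetic check of the cutoff used in the example above, assuming the usual Gregorian leap-year count (60 years from 1910-01-01 back to the Unix epoch, 15 of them leap years):

      #include <stdio.h>

      int main(void)
      {
          long long days = 60LL * 365 + 15;   /* 1910-01-01 .. 1970-01-01 */

          printf("%lld\n", -days * 86400);    /* prints -1893456000 */
          return 0;
      }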
* Update time zone data files to tzdata release 2019a. (Tom Lane, 2019-04-26)
| | | | | | | | | | DST law changes in Palestine and Metlakatla. Historical corrections for Israel. Etc/UCT is now a backward-compatibility link to Etc/UTC, instead of being a separate zone that generates the abbreviation "UCT", which nowadays is typically a typo. Postgres will still accept "UCT" as an input zone name, but it won't output it.
* Fix some minor postmaster-state-machine issues. (Tom Lane, 2019-04-24)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | In sigusr1_handler, don't ignore PMSIGNAL_ADVANCE_STATE_MACHINE based on pmState. The restriction is unnecessary (PostmasterStateMachine should work in any state), not future-proof (since it makes too many assumptions about why the signal might be sent), and broken even today because a race condition can make it necessary to respond to the signal in PM_WAIT_READONLY state. The race condition seems unlikely, but if it did happen, a hot-standby postmaster could fail to shut down after receiving a smart-shutdown request. In MaybeStartWalReceiver, don't clear the WalReceiverRequested flag if the fork attempt fails. Leaving it set allows us to try again in future iterations of the postmaster idle loop. (The startup process would eventually send a fresh request signal, but this change may allow us to retry the fork sooner.) Remove an obsolete comment and unnecessary test in PostmasterStateMachine's handling of PM_SHUTDOWN_2 state. It's not possible to have a live walreceiver in that state, and AFAICT has not been possible since commit 5e85315ea. This isn't a live bug, but the false comment is quite confusing to readers. In passing, rearrange sigusr1_handler's CheckPromoteSignal tests so that we don't uselessly perform stat() calls that we're going to ignore the results of. Add some comments clarifying the behavior of MaybeStartWalReceiver; I very nearly rearranged it in a way that'd reintroduce the race condition fixed in e5d494d78. Mea culpa for not commenting that properly at the time. Back-patch to all supported branches. The PMSIGNAL_ADVANCE_STATE_MACHINE change is the only one of even minor significance, but we might as well keep this code in sync across branches. Discussion: https://postgr.es/m/9001.1556046681@sss.pgh.pa.us
* Fix detection of passwords hashed with MD5 (Michael Paquier, 2019-04-24)
  This commit fixes an issue related to the way password verifiers hashed with MD5 are detected, which could lead to plain passwords being treated as legitimate MD5 hashes. An MD5-hashed entry was checked based on whether its header uses "md5" and whether the string length matches what is expected. Unfortunately the code never checked if the hash only used hexadecimal characters after the three-character prefix.
  Fix 9.6 down to 9.4, where this code is present. This area of the code has changed in 10 and upwards with the introduction of SCRAM, which led to a different fix committed as of ccae190.
  Reported-by: Tom Lane
  Author: Michael Paquier
  Reviewed-by: Jonathan Katz
  Discussion: https://postgr.es/m/016deb6b-1f0a-8e9f-1833-a8675b170aa9@postgresql.org
  Backpatch-through: 9.4
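  An illustrative stand-in (not the backend's actual helper) for the stricter check: an MD5 verifier is the literal prefix "md5" followed by exactly 32 hexadecimal digits, so anything else, including a plain password that merely starts with "md5", is rejected.

      #include <ctype.h>
      #include <stdbool.h>
      #include <string.h>

      static bool
      looks_like_md5_verifier(const char *s)
      {
          if (strncmp(s, "md5", 3) != 0 || strlen(s) != 3 + 32)
              return false;
          for (const char *p = s + 3; *p; p++)
          {
              if (!isxdigit((unsigned char) *p))
                  return false;               /* the hex-only check this commit adds */
          }
          return true;
      }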
* Repair assorted issues in locale data extraction. (Tom Lane, 2019-04-23)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | cache_locale_time (extraction of LC_TIME-related info) had never been taught the lessons we previously learned about extraction of info related to LC_MONETARY and LC_NUMERIC. Specifically, commit 95a777c61 taught PGLC_localeconv() that data coming out of localeconv() was in an encoding determined by the relevant locale, but we didn't realize that there's a similar issue with strftime(). And commit a4930e7ca hardened PGLC_localeconv() against errors occurring partway through, but failed to do likewise for cache_locale_time(). So, rearrange the latter function to perform encoding conversion and not risk failure while it's got the locales set to temporary values. This time around I also changed PGLC_localeconv() to treat it as FATAL if it can't restore the previous settings of the locale values. There is no reason (except possibly OOM) for that to fail, and proceeding with the wrong locale values seems like a seriously bad idea --- especially on Windows where we have to also temporarily change LC_CTYPE. Also, protect against the possibility that we can't identify the codeset reported for LC_MONETARY or LC_NUMERIC; rather than just failing, try to validate the data without conversion. The user-visible symptom this fixes is that if LC_TIME is set to a locale name that implies an encoding different from the database encoding, non-ASCII localized day and month names would be retrieved in the wrong encoding, leading to either unexpected encoding-conversion error reports or wrong output from to_char(). The other possible failure modes are unlikely enough that we've not seen reports of them, AFAIK. The encoding conversion problems do not manifest on Windows, since we'd already created special-case code to handle that issue there. Per report from Juan José Santamaría Flecha. Back-patch to all supported versions. Juan José Santamaría Flecha and Tom Lane Discussion: https://postgr.es/m/CAC+AXB22So5aZm2vZe+MChYXec7gWfr-n-SK-iO091R0P_1Tew@mail.gmail.com
* postgresql.conf.sample: add proper defaults for include actions (Bruce Momjian, 2019-04-17)
| | | | | | | | | | | | | Previously, include actions include_dir, include_if_exists, and include listed commented-out values which were not the defaults, which is inconsistent with other entries. Instead, replace them with '', which is the default value. Reported-by: Emanuel Araújo Discussion: https://postgr.es/m/CAMuTAkYMx6Q27wpELDR3_v9aG443y7ZjeXu15_+1nGUjhMWOJA@mail.gmail.com Backpatch-through: 9.4
* Consistently test for in-use shared memory. (Noah Misch, 2019-04-12)
| | | | | | | | | | | | | | | | | | | | | | postmaster startup scrutinizes any shared memory segment recorded in postmaster.pid, exiting if that segment matches the current data directory and has an attached process. When the postmaster.pid file was missing, a starting postmaster used weaker checks. Change to use the same checks in both scenarios. This increases the chance of a startup failure, in lieu of data corruption, if the DBA does "kill -9 `head -n1 postmaster.pid` && rm postmaster.pid && pg_ctl -w start". A postmaster will no longer stop if shmat() of an old segment fails with EACCES. A postmaster will no longer recycle segments pertaining to other data directories. That's good for production, but it's bad for integration tests that crash a postmaster and immediately delete its data directory. Such a test now leaks a segment indefinitely. No "make check-world" test does that. win32_shmem.c already avoided all these problems. In 9.6 and later, enhance PostgresNode to facilitate testing. Back-patch to 9.4 (all supported versions). Reviewed (in earlier versions) by Daniel Gustafsson and Kyotaro HORIGUCHI. Discussion: https://postgr.es/m/20190408064141.GA2016666@rfd.leadboat.com
* Fix off-by-one check that can lead to a memory overflow in ecpg. (Michael Meskes, 2019-04-11)
| | | | Patch by Liu Huailing <liuhuailing@cn.fujitsu.com>
* Fix backwards test in operator_precedence_warning logic. (Tom Lane, 2019-04-10)
| | | | | | | | | | Warnings about unary minus might have been wrong. It's a bit surprising that nobody noticed yet ... probably the precedence-warning feature hasn't really been used much in the field. Rikard Falkeborn Discussion: https://postgr.es/m/CADRDgG6fzA8A2oeygUw4=o7ywo4kvz26NxCSgpq22nMD73Bx4Q@mail.gmail.com
* Define WIN32_STACK_RLIMIT throughout win32 and cygwin builds. (Noah Misch, 2019-04-09)
| | | | | | | | The MSVC build system already did this, and commit 617dc6d299c957e2784320382b3277ede01d9c63 used it in a second file. Back-patch to 9.4, like that commit. Discussion: https://postgr.es/m/CAA8=A7_1SWc3+3Z=-utQrQFOtrj_DeohRVt7diA2tZozxsyUOQ@mail.gmail.com
* Avoid "could not reattach" by providing space for concurrent allocation. (Noah Misch, 2019-04-08)
| | | | | | | | | | | | | | | | | We've long had reports of intermittent "could not reattach to shared memory" errors on Windows. Buildfarm member dory fails that way when PGSharedMemoryReAttach() execution overlaps with creation of a thread for the process's "default thread pool". Fix that by providing a second region to receive asynchronous allocations that would otherwise intrude into UsedShmemSegAddr. In pgwin32_ReserveSharedMemoryRegion(), stop trying to free reservations landing at incorrect addresses; the caller's next step has been to terminate the affected process. Back-patch to 9.4 (all supported versions). Reviewed by Tom Lane. He also did much of the prerequisite research; see commit bcbf2346d69f6006f126044864dd9383d50d87b4. Discussion: https://postgr.es/m/20190402135442.GA1173872@rfd.leadboat.com
* Fix improper interaction of FULL JOINs with lateral references. (Tom Lane, 2019-04-08)
| | | | | | | | | | | | | | | | | | | | | join_is_legal() needs to reject forming certain outer joins in cases where that would lead the planner down a blind alley. However, it mistakenly supposed that the way to handle full joins was to treat them as applying the same constraints as for left joins, only to both sides. That doesn't work, as shown in bug #15741 from Anthony Skorski: given a lateral reference out of a join that's fully enclosed by a full join, the code would fail to believe that any join ordering is legal, resulting in errors like "failed to build any N-way joins". However, we don't really need to consider full joins at all for this purpose, because we effectively force them to be evaluated in syntactic order, and that order is always legal for lateral references. Hence, get rid of this broken logic for full joins and just ignore them instead. This seems to have been an oversight in commit 7e19db0c0. Back-patch to all supported branches, as that was. Discussion: https://postgr.es/m/15741-276f1f464b3f40eb@postgresql.org
* Revert "Consistently test for in-use shared memory." (Noah Misch, 2019-04-05)
| | | | | | | | | This reverts commits 2f932f71d9f2963bbd201129d7b971c8f5f077fd, 16ee6eaf80a40007a138b60bb5661660058d0422 and 6f0e190056fe441f7cf788ff19b62b13c94f68f3. The buildfarm has revealed several bugs. Back-patch like the original commits. Discussion: https://postgr.es/m/20190404145319.GA1720877@rfd.leadboat.com
* Silence -Wimplicit-fallthrough in sysv_shmem.c. (Noah Misch, 2019-04-03)
| | | | | | | | | | Commit 2f932f71d9f2963bbd201129d7b971c8f5f077fd added code that elicits a warning on buildfarm member flaviventris. Back-patch to 9.4, like that commit. Reported by Andres Freund. Discussion: https://postgr.es/m/20190404020057.galelv7by75ekqrh@alap3.anarazel.de
* Handle USE_MODULE_DB for all tests able to use an installed postmaster. (Noah Misch, 2019-04-03)
| | | | | | | | | | | | | | | | | | | | When $(MODULES) and $(MODULE_big) are empty, derive the database name from the first element of $(REGRESS) instead of using a constant string. When deriving the database name from $(MODULES), use its first element instead of the entire list; the earlier approach would fail if any multi-module directory had $(REGRESS) tests. Treat isolation suites and src/pl correspondingly. Under USE_MODULE_DB=1, installcheck-world and check-world no longer reuse any database name in a given postmaster. Buildfarm members axolotl, mandrill and frogfish saw spurious "is being accessed by other users" failures that would not have happened without database name reuse. (The CountOtherDBBackends() 5s deadline expired during DROP DATABASE; a backend for an earlier test suite had used the same database name and had not yet exited.) Back-patch to 9.4 (all supported versions), except bits pertaining to isolation suites. Concept reviewed by Andrew Dunstan, Andres Freund and Tom Lane. Discussion: https://postgr.es/m/20190401135213.GE891537@rfd.leadboat.com
* Consistently test for in-use shared memory. (Noah Misch, 2019-04-03)
| | | | | | | | | | | | | | | | | | | | | postmaster startup scrutinizes any shared memory segment recorded in postmaster.pid, exiting if that segment matches the current data directory and has an attached process. When the postmaster.pid file was missing, a starting postmaster used weaker checks. Change to use the same checks in both scenarios. This increases the chance of a startup failure, in lieu of data corruption, if the DBA does "kill -9 `head -n1 postmaster.pid` && rm postmaster.pid && pg_ctl -w start". A postmaster will no longer recycle segments pertaining to other data directories. That's good for production, but it's bad for integration tests that crash a postmaster and immediately delete its data directory. Such a test now leaks a segment indefinitely. No "make check-world" test does that. win32_shmem.c already avoided all these problems. In 9.6 and later, enhance PostgresNode to facilitate testing. Back-patch to 9.4 (all supported versions). Reviewed by Daniel Gustafsson and Kyotaro HORIGUCHI. Discussion: https://postgr.es/m/20130911033341.GD225735@tornado.leadboat.com
* Perform RLS subquery checks as the right user when going via a view. (Dean Rasheed, 2019-04-02)
| | | | | | | | | | | | | | When accessing a table with RLS via a view, the RLS checks are performed as the view owner. However, the code neglected to propagate that to any subqueries in the RLS checks. Fix that by calling setRuleCheckAsUser() for all RLS policy quals and withCheckOption checks for RTEs with RLS. Back-patch to 9.5 where RLS was added. Per bug #15708 from daurnimator. Discussion: https://postgr.es/m/15708-d65cab2ce9b1717a@postgresql.org
* Update HINT for pre-existing shared memory block. (Noah Misch, 2019-03-31)
| | | | | | | | | | | | One should almost always terminate an old process, not use a manual removal tool like ipcrm. Removal of the ipcclean script eleven years ago (39627b1ae680cba44f6e56ca5facec4fdbfe9495) and its non-replacement corroborate that manual shm removal is now a niche goal. Back-patch to 9.4 (all supported versions). Reviewed by Daniel Gustafsson and Kyotaro HORIGUCHI. Discussion: https://postgr.es/m/20180812064815.GB2301738@rfd.leadboat.com
* Have pg_upgrade's Makefile honor NO_TEMP_INSTALL (Andrew Dunstan, 2019-03-31)
| | | | | | Backpatch to 9.5, when pg_upgrade's location changed. Discussion: https://postgr.es/m/5506b8fa-7dad-8483-053c-7ca7ef04f01a@2ndQuadrant.com
* Track unowned relations in doubly-linked list (Tomas Vondra, 2019-03-27)
  Relations dropped in a single transaction are tracked in a list of unowned relations. With a large number of dropped relations this resulted in poor performance at the end of a transaction, when the relations are removed from the singly linked list one by one.
  Commit b4166911 attempted to address this issue (particularly when it happens during recovery) by removing the relations in a reverse order, resulting in O(1) lookups in the list of unowned relations. This did not work reliably, though, and it was possible to trigger the O(N^2) behavior in various ways.
  Instead of trying to remove the relations in a specific order with respect to the linked list, which seems rather fragile, switch to a regular doubly linked list. That allows us to remove relations cheaply no matter where in the list they are.
  As b4166911 was a bugfix, backpatched to all supported versions, do the same thing here.
  Reviewed-by: Alvaro Herrera
  Discussion: https://www.postgresql.org/message-id/flat/80c27103-99e4-1d0c-642c-d9f3b94aaa0a%402ndquadrant.com
  Backpatch-through: 9.4
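  A generic sketch (not PostgreSQL's actual ilist.h API) of why the switch helps: with an embedded prev/next pair and a circular list around a sentinel head, unlinking any node is O(1), whereas a singly linked list must be walked to find the predecessor.

      typedef struct Node
      {
          struct Node *prev;
          struct Node *next;
      } Node;

      static void
      list_init(Node *head)
      {
          head->prev = head->next = head;     /* empty circular list around the sentinel */
      }

      static void
      list_push_tail(Node *head, Node *n)
      {
          n->prev = head->prev;
          n->next = head;
          head->prev->next = n;
          head->prev = n;
      }

      static void
      list_unlink(Node *n)
      {
          n->prev->next = n->next;            /* no scan of the list needed, unlike a */
          n->next->prev = n->prev;            /* singly linked list                   */
      }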
* Fix WAL format incompatibility introduced by backpatching of 52ac6cd2d0 (Alexander Korotkov, 2019-03-24)
  52ac6cd2d0 added a new field to ginxlogDeletePage and was backpatched to 9.4. That led to problems when a patched postgres instance applies WAL records generated by a non-patched one. WAL records generated by a non-patched instance don't contain the new field, which the patched one expects to see.
  Thankfully, we can distinguish patched and non-patched WAL records by their data size. If we see that a WAL record was generated by a non-patched instance, we skip processing of the new field.
  This commit comes with some assertions. In particular, if it appears that on some platform the struct data size didn't change, then a static assertion will trigger.
  Reported-by: Simon Riggs
  Discussion: https://postgr.es/m/CANP8%2Bj%2BK4whxf7ET7%2BgO%2BG-baC3-WxqqH%3DnV4X2CgfEPA3Yu3g%40mail.gmail.com
  Author: Alexander Korotkov
  Reviewed-by: Simon Riggs, Alvaro Herrera
  Backpatch-through: 9.4
* Remove inadequate check for duplicate "xml" PI. (Tom Lane, 2019-03-23)
| | | | | | I failed to think about PIs starting with "xml". We don't really need this check at all, so just take it out. Oversight in commit 8d1dadb25 et al.
* Revert strlen -> strnlen optimization pre-v11. (Tom Lane, 2019-03-23)
| | | | | We don't have a src/port substitute for that function in older branches, so it fails on platforms lacking the function natively. Per buildfarm.
* Ensure xmloption = content while restoring pg_dump output. (Tom Lane, 2019-03-23)
| | | | | | | | In combination with the previous commit, this ensures that valid XML data can always be dumped and reloaded, whether it is "document" or "content". Discussion: https://postgr.es/m/CAN-V+g-6JqUQEQZ55Q3toXEN6d5Ez5uvzL4VR+8KtvJKj31taw@mail.gmail.com
* Accept XML documents when xmloption = content, as required by SQL:2006+. (Tom Lane, 2019-03-23)
| | | | | | | | | | | | | | | | | | | | | | | | | | | Previously we were using the SQL:2003 definition, which doesn't allow this, but that creates a serious dump/restore gotcha: there is no setting of xmloption that will allow all valid XML data. Hence, switch to the 2006 definition. Since libxml doesn't accept <!DOCTYPE> directives in the mode we use for CONTENT parsing, the implementation is to detect <!DOCTYPE> in the input and switch to DOCUMENT parsing mode. This should not cost much, because <!DOCTYPE> should be close to the front of the input if it's there at all. It's possible that this causes the error messages for malformed input to be slightly different than they were before, if said input includes <!DOCTYPE>; but that does not seem like a big problem. In passing, buy back a few cycles in parsing of large XML documents by not doing strlen() of the whole input in parse_xml_decl(). Back-patch because dump/restore failures are not nice. This change shouldn't break any cases that worked before, so it seems safe to back-patch. Chapman Flack (revised a bit by me) Discussion: https://postgr.es/m/CAN-V+g-6JqUQEQZ55Q3toXEN6d5Ez5uvzL4VR+8KtvJKj31taw@mail.gmail.com
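  A crude illustration, with a hypothetical helper name, of the detection idea: if CONTENT input starts (after optional whitespace and an XML declaration) with <!DOCTYPE, parse it in DOCUMENT mode instead. The real scan in xml.c is more careful than this.

      #include <ctype.h>
      #include <stdbool.h>
      #include <string.h>

      static bool
      content_looks_like_document(const char *str)
      {
          const char *p = str;

          while (isspace((unsigned char) *p))
              p++;
          if (strncmp(p, "<?xml", 5) == 0)    /* skip an XML declaration, if present */
          {
              const char *end = strstr(p, "?>");

              if (end == NULL)
                  return false;
              p = end + 2;
              while (isspace((unsigned char) *p))
                  p++;
          }
          return strncmp(p, "<!DOCTYPE", 9) == 0;
      }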
* Make checkpoint requests more robust. (Tom Lane, 2019-03-19)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Commit 6f6a6d8b1 introduced a delay of up to 2 seconds if we're trying to request a checkpoint but the checkpointer hasn't started yet (or, much less likely, our kill() call fails). However buildfarm experience shows that that's not quite enough for slow or heavily-loaded machines. There's no good reason to assume that the checkpointer won't start eventually, so we may as well make the timeout much longer, say 60 sec. However, if the caller didn't say CHECKPOINT_WAIT, it seems like a bad idea to be waiting at all, much less for as long as 60 sec. We can remove the need for that, and make this whole thing more robust, by adjusting the code so that the existence of a pending checkpoint request is clear from the contents of shared memory, and making sure that the checkpointer process will notice it at startup even if it did not get a signal. In this way there's no need for a non-CHECKPOINT_WAIT call to wait at all; if it can't send the signal, it can nonetheless assume that the checkpointer will eventually service the request. A potential downside of this change is that "kill -INT" on the checkpointer process is no longer enough to trigger a checkpoint, should anyone be relying on something so hacky. But there's no obvious reason to do it like that rather than issuing a plain old CHECKPOINT command, so we'll assume that nobody is. There doesn't seem to be a way to preserve this undocumented quasi-feature without introducing race conditions. Since a principal reason for messing with this is to prevent intermittent buildfarm failures, back-patch to all supported branches. Discussion: https://postgr.es/m/27830.1552752475@sss.pgh.pa.us
* Ensure dummy paths have correct required_outer if rel is parameterized. (Tom Lane, 2019-03-14)
| | | | | | | | | | | | | | | | The assertions added by commits 34ea1ab7f et al found another problem: set_dummy_rel_pathlist and mark_dummy_rel were failing to label the dummy paths they create with the correct outer_relids, in case the relation is necessarily parameterized due to having lateral references in its tlist. It's likely that this has no user-visible consequences in production builds, at the moment; but still an assertion failure is a bad thing, so back-patch the fix. Per bug #15694 from Roman Zharkov (via Alexander Lakhin) and an independent report by Tushar Ahuja. Discussion: https://postgr.es/m/15694-74f2ca97e7044f7f@postgresql.org Discussion: https://postgr.es/m/7d72ab20-c725-3ce2-f99d-4e64dd8a0de6@enterprisedb.com
* Fix potential memory access violation in ecpg if filename of include file is shorter than 2 characters. (Michael Meskes, 2019-03-11)
  Patch by: "Wu, Fei" <wufei.fnst@cn.fujitsu.com>
* Disallow NaN as a value for floating-point GUCs. (Tom Lane, 2019-03-10)
| | | | | | | | | | | | | | | | | | None of the code that uses GUC values is really prepared for them to hold NaN, but parse_real() didn't have any defense against accepting such a value. Treat it the same as a syntax error. I haven't attempted to analyze the exact consequences of setting any of the float GUCs to NaN, but since they're quite unlikely to be good, this seems like a back-patchable bug fix. Note: we don't need an explicit test for +-Infinity because those will be rejected by existing range checks. I added a regression test for that in HEAD, but not older branches because the spelling of the value in the error message will be platform-dependent in branches where we don't always use port/snprintf.c. Discussion: https://postgr.es/m/1798.1552165479@sss.pgh.pa.us
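  A simplified stand-in for the float GUC parser (not the real parse_real(), which also handles units and trailing whitespace): strtod() happily accepts "NaN", so the fix is to treat that result the same as a syntax error, while infinities are caught by the existing min/max range checks.

      #include <math.h>
      #include <stdbool.h>
      #include <stdlib.h>

      static bool
      parse_real_like(const char *value, double *result)
      {
          char   *end;
          double  val = strtod(value, &end);

          if (end == value || *end != '\0')
              return false;                   /* not a plain number: syntax error */
          if (isnan(val))
              return false;                   /* new: NaN is rejected like a syntax error */
          *result = val;
          return true;
      }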
* Fix error handling of readdir() port implementation on first file lookup (Michael Paquier, 2019-03-04)
  The implementation of readdir() in src/port/, which gets used by MSVC, was added in 399a36a, and since the beginning it has considered all errors on the first file lookup as ENOENT, setting errno accordingly and letting the caller think that the directory is empty. While this is normally enough for the case of the backend, it can confuse callers of this routine on Windows, as all errors would map to the same behavior. So, for example, even a permission error would look like an empty directory, while there could actually be contents in it.
  This commit changes the error handling so that readdir() behaves like native implementations: set errno=0 when the error is ERROR_FILE_NOT_FOUND, and treat other errors as plain failures.
  While looking at the patch, I noticed that MinGW does not enforce errno=0 when looking at the first file, but does enforce it on the next file lookups. A comment related to that was incorrect in the code.
  Reported-by: Yuri Kurenkov
  Diagnosed-by: Yuri Kurenkov, Grigory Smolkin
  Author: Konstantin Knizhnik
  Reviewed-by: Andrew Dunstan, Michael Paquier
  Discussion: https://postgr.es/m/2cad7829-8d66-e39c-b937-ac825db5203d@postgrespro.ru
  Backpatch-through: 9.4
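  An illustrative reduction of the new behavior (the real code lives in src/port/dirent.c and maps the specific Win32 error rather than the blanket EACCES used here): only ERROR_FILE_NOT_FOUND on the first lookup means "empty directory".

      #include <errno.h>

      #define W32_ERROR_FILE_NOT_FOUND 2      /* value of Win32 ERROR_FILE_NOT_FOUND */

      static int
      first_lookup_failed(int win32_error)
      {
          if (win32_error == W32_ERROR_FILE_NOT_FOUND)
          {
              errno = 0;                      /* empty directory: readdir() returns NULL, no error */
              return 0;
          }
          errno = EACCES;                     /* simplification: surface anything else as a failure */
          return -1;
      }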