path: root/src
* Improve gendef.pl diagnostic on failure to open sym file (Andrew Dunstan, 2017-10-26)

  There have been numerous buildfarm failures but the diagnostic is currently silent about the reason for failure to open the file. Let's see if we can get to the bottom of it. Backpatch to all live branches.
* Fixed handling of escape character in libecpg. (Michael Meskes, 2017-10-26)

  Patch by Tsunakawa Takayuki <tsunakawa.takay@jp.fujitsu.com>
* Fix libpq to not require user's home directory to exist. (Tom Lane, 2017-10-25)

  Some people like to run libpq-using applications in environments where there's no home directory. We've broken that scenario before (cf commits 5b4067798 and bd58d9d88), and commit ba005f193 broke it again, by making it a hard error if we fail to get the home directory name while looking for ~/.pgpass. The previous precedent is that if we can't get the home directory name, we should just silently act as though the file we hoped to find there doesn't exist. Rearrange the new code to honor that.

  Looking around, the service-file code added by commit 41a4e4595 had the same disease. Apparently, that escaped notice because it only runs when a service name has been specified, which I guess the people who use this scenario don't do. Nonetheless, it's wrong too, so fix that case as well.

  Add a comment about this policy to pqGetHomeDirectory, in the probably vain hope of forestalling the same error in future. And upgrade the rather miserable commenting in parseServiceInfo, too.

  In passing, also back off parseServiceInfo's assumption that only ENOENT is an ignorable error from stat() when checking a service file. We would need to ignore at least ENOTDIR as well (cf 5b4067798), and seeing that the far-better-tested code for ~/.pgpass treats all stat() failures alike, I think this code ought to as well.

  Per bug #14872 from Dan Watson. Back-patch the .pgpass change to v10 where ba005f193 came in. The service-file bugs are far older, so back-patch the other changes to all supported branches.

  Discussion: https://postgr.es/m/20171025200457.1471.34504@wrigleys.postgresql.org
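  A minimal sketch of the policy described above (not the actual libpq code; the helper name is hypothetical, though pqGetHomeDirectory, MAXPGPATH, and PGPASSFILE are real libpq/PG symbols): failure to resolve the home directory is treated exactly like the file being absent.

      /* Hedged sketch: if the home directory can't be determined, act as
       * though ~/.pgpass simply doesn't exist, rather than erroring out. */
      static bool
      get_passfile_path(char *pgpassfile, int bufsize)
      {
          char    homedir[MAXPGPATH];

          if (!pqGetHomeDirectory(homedir, sizeof(homedir)))
              return false;       /* no home directory: silently skip lookup */
          snprintf(pgpassfile, bufsize, "%s/%s", homedir, PGPASSFILE);
          return true;
      }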
* Process variadic arguments consistently in json functions (Andrew Dunstan, 2017-10-25)

  json_build_object and json_build_array and the jsonb equivalents did not correctly process explicit VARIADIC arguments. They are modified to use the new extract_variadic_args() utility function which abstracts away the details of the call method.

  Michael Paquier, reviewed by Tom Lane and Dmitry Dolgov.

  Backpatch to 9.5 for the jsonb fixes and 9.4 for the json fixes, as that's where they originated.
* Add a utility function to extract variadic function arguments (Andrew Dunstan, 2017-10-25)

  This is especially useful in the case of "VARIADIC ANY" functions. The caller can get the arguments and types regardless of whether or not an explicit VARIADIC array argument has been used. The function also provides an option to convert arguments of type "unknown" to "text".

  Michael Paquier and me, reviewed by Tom Lane.

  Backpatch to 9.4 in order to support the following json bug fix.
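  A hedged usage sketch of the new utility (the wrapper function is hypothetical; the extract_variadic_args() call matches the description above):

      Datum
      my_variadic_any_func(PG_FUNCTION_ARGS)   /* hypothetical example */
      {
          Datum  *values;
          Oid    *types;
          bool   *nulls;
          int     nargs;

          /* Works the same whether the caller wrote f(a, b, c) or
           * f(VARIADIC ARRAY[...]); 'true' converts unknown to text. */
          nargs = extract_variadic_args(fcinfo, 0, true,
                                        &values, &types, &nulls);
          if (nargs < 0)              /* caller passed VARIADIC NULL */
              PG_RETURN_NULL();

          /* ... process values[i] / types[i] / nulls[i] here ... */
          PG_RETURN_INT32(nargs);
      }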
* Update time zone data files to tzdata release 2017c. (Tom Lane, 2017-10-23)

  DST law changes in Fiji, Namibia, Northern Cyprus, Sudan, Tonga, and Turks & Caicos Islands. Historical corrections for Alaska, Apia, Burma, Calcutta, Detroit, Ireland, Namibia, and Pago Pago.
* Sync our copy of the timezone library with IANA release tzcode2017c. (Tom Lane, 2017-10-23)

  This is a trivial update containing only cosmetic changes. The point is just to get back to being synced with an official release of tzcode, rather than some ad-hoc point in their commit history, which is where commit 47f849a3c left it.
* Fix some oversights in expression dependency recording. (Tom Lane, 2017-10-23)

  find_expr_references() neglected to record a dependency on the result type of a FieldSelect node, allowing a DROP TYPE to break a view or rule that contains such an expression. I think we'd omitted this case intentionally, reasoning that there would always be a related dependency ensuring that the DROP would cascade to the view. But at least with nested field selection expressions, that's not true, as shown in bug #14867 from Mansur Galiev. Add the dependency, and for good measure a dependency on the node's exposed collation.

  Likewise add a dependency on the result type of a FieldStore. I think here the reasoning was that it'd only appear within an assignment to a field, and the dependency on the field's column would be enough ... but having seen this example, I think that's wrong for nested-composites cases.

  Looking at nearby code, I notice we're not recording a dependency on the exposed collation of CoerceViaIO, which seems inconsistent with our choices for related node types. Maybe that's OK but I'm feeling suspicious of this code today, so let's add that; it certainly can't hurt.

  This patch does not do anything to protect already-existing views, only views created after it's installed. But seeing that the issue has been there a very long time and nobody noticed till now, that's probably good enough.

  Back-patch to all supported branches.

  Discussion: https://postgr.es/m/20171023150118.1477.19174@wrigleys.postgresql.org
* Fix typcache's failure to treat ranges as container types. (Tom Lane, 2017-10-20)

  Like the similar logic for arrays and records, it's necessary to examine the range's subtype to decide whether the range type can support hashing. We can omit checking the subtype for btree-defined operations, though, since range subtypes are required to have those operations. (Possibly that simplification for btree cases led us to overlook that it does not apply for hash cases.)

  This is only an issue if the subtype lacks hash support, which is not true of any built-in range type, but it's easy to demonstrate a problem with a range type over, eg, money: you can get a "could not identify a hash function" failure when the planner is misled into thinking that hash join or aggregation would work.

  This was born broken, so back-patch to all supported branches.
* Fix misparsing of non-newline-terminated pg_hba.conf files. (Tom Lane, 2017-10-17)

  This back-patches the v10-cycle commit 1e5a5d03d into 9.3 - 9.6. I had noticed at the time that that was fixing a bug, namely that next_token() might advance *lineptr past the line-terminating '\0', but given the lack of field complaints I too easily convinced myself that the problem was only latent. It's not, because tokenize_file() decides whether there's more on the line using "strlen(lineptr)". The bug is indeed latent on a newline-terminated line, because then the newline-stripping bit in tokenize_file() means we'll have two or more consecutive '\0's in the buffer, masking the fact that we accidentally advanced over the first one. But the last line in the file might not be null-terminated, allowing the loop to see and process garbage, as reported by Mark Jones in bug #14859.

  The bug doesn't exist in <= 9.2; there next_token() is reading directly from a file, and termination of the outer loop relies on an feof() test not a buffer pointer check. Probably commit 7f49a67f9 can be blamed for this bug, but I didn't track it down exactly.

  Commit 1e5a5d03d does a bit more than the minimum needed to fix the bug, but I felt the rest of it was good cleanup, so applying it all.

  Discussion: https://postgr.es/m/20171017141814.8203.27280@wrigleys.postgresql.org
* Doc: fix missing explanation of default object privileges. (Tom Lane, 2017-10-11)

  The GRANT reference page, which lists the default privileges for new objects, failed to mention that USAGE is granted by default for data types and domains. As a lesser sin, it also did not specify anything about the initial privileges for sequences, FDWs, foreign servers, or large objects. Fix that, and add a comment to acldefault() in the probably vain hope of getting people to maintain this list in future.

  Noted by Laurenz Albe, though I editorialized on the wording a bit. Back-patch to all supported branches, since they all have this behavior.

  Discussion: https://postgr.es/m/1507620895.4152.1.camel@cybertec.at
* Fix low-probability loss of NOTIFY messages due to XID wraparound. (Tom Lane, 2017-10-11)

  Up to now async.c has used TransactionIdIsInProgress() to detect whether a notify message's source transaction is still running. However, that function has a quick-exit path that reports that XIDs before RecentXmin are no longer running. If a listening backend is doing nothing but listening, and not running any queries, there is nothing that will advance its value of RecentXmin. Once 2 billion transactions elapse, the RecentXmin check causes active transactions to be reported as not running. If they aren't committed yet according to CLOG, async.c decides they aborted and discards their messages. The timing for that is a bit tight but it can happen when multiple backends are sending notifies concurrently. The net symptom therefore is that a sufficiently-long-surviving listen-only backend starts to miss some fraction of NOTIFY traffic, but only under heavy load.

  The only function that updates RecentXmin is GetSnapshotData(). A brute-force fix would therefore be to take a snapshot before processing incoming notify messages. But that would add cycles, as well as contention for the ProcArrayLock. We can be smarter: having taken the snapshot, let's use that to check for running XIDs, and not call TransactionIdIsInProgress() at all. In this way we reduce the number of ProcArrayLock acquisitions from one per message to one per notify interrupt; that's the same under light load but should be a benefit under heavy load. Light testing says that this change is a wash performance-wise for normal loads.

  I looked around for other callers of TransactionIdIsInProgress() that might be at similar risk, and didn't find any; all of them are inside transactions that presumably have already taken a snapshot.

  Problem report and diagnosis by Marko Tiikkaja, patch by me. Back-patch to all supported branches, since it's been like this since 9.0.

  Discussion: https://postgr.es/m/20170926182935.14128.65278@wrigleys.postgresql.org
* Fix crash when logical decoding is invoked from a PL function. (Tom Lane, 2017-10-06)

  The logical decoding functions do BeginInternalSubTransaction and RollbackAndReleaseCurrentSubTransaction to clean up after themselves. It turns out that AtEOSubXact_SPI has an unrecognized assumption that we always need to cancel the active SPI operation in the SPI context that surrounds the subtransaction (if there is one). That's true when the RollbackAndReleaseCurrentSubTransaction call is coming from the SPI-using function itself, but not when it's happening inside some unrelated function invoked by a SPI query. In practice the affected callers are the various PLs.

  To fix, record the current subtransaction ID when we begin a SPI operation, and clean up only if that ID is the subtransaction being canceled.

  Also, remove AtEOSubXact_SPI's assertion that it must have cleaned up the surrounding SPI context's active tuptable. That's proven wrong by the same test case.

  Also clarify (or, if you prefer, reinterpret) the calling conventions for _SPI_begin_call and _SPI_end_call. The memory context cleanup in the latter means that these have always had the flavor of a matched resource-management pair, but they weren't documented that way before.

  Per report from Ben Chobot.

  Back-patch to 9.4 where logical decoding came in. In principle, the SPI changes should go all the way back, since the problem dates back to commit 7ec1c5a86. But given the lack of field complaints it seems few people are using internal subtransactions in this way. So I don't feel a need to take any risks in 9.2/9.3.

  Discussion: https://postgr.es/m/73FBA179-C68C-4540-9473-71E865408B15@silentmedia.com
* Fix access-off-end-of-array in clog.c. (Tom Lane, 2017-10-06)

  Sloppy loop coding in set_status_by_pages() resulted in fetching one array element more than it should from the subxids[] array. The odds of this resulting in SIGSEGV are pretty small, but we've certainly seen that happen with similar mistakes elsewhere. While at it, we can get rid of an extra TransactionIdToPage() calculation per loop.

  Per report from David Binderman. Back-patch to all supported branches, since this code is quite old.

  Discussion: https://postgr.es/m/HE1PR0802MB2331CBA919CBFFF0C465EB429C710@HE1PR0802MB2331.eurprd08.prod.outlook.com
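  A hedged sketch of the loop shape in question (simplified, not the committed code): the fix is to test the index bound before dereferencing, and to compute the page number once per group.

      /* Group subxids[] by clog page. The buggy form evaluated
       * TransactionIdToPage(subxids[i]) before checking i < nsubxids,
       * reading one element past the end on the final group. */
      int     i = 0;

      while (i < nsubxids)
      {
          int     pageno = TransactionIdToPage(subxids[i]);
          int     num_on_page = 0;

          while (i < nsubxids && TransactionIdToPage(subxids[i]) == pageno)
          {
              num_on_page++;
              i++;
          }
          /* ... set the status of num_on_page xids on this page ... */
      }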
* Fix traversal of half-frozen update chains (Alvaro Herrera, 2017-10-06)

  When some tuple versions in an update chain are frozen due to them being older than freeze_min_age, the xmax/xmin trail can become broken. This breaks HOT (and probably other things). A subsequent VACUUM can break things in more serious ways, such as leaving orphan heap-only tuples whose root HOT redirect items were removed. This can be seen because index creation (or REINDEX) complains like

      ERROR: XX000: failed to find parent tuple for heap-only tuple at (0,7) in table "t"

  Because of relfrozenxid constraints, we cannot avoid the freezing of the early tuples, so we must cope with the results: whenever we see an Xmin of FrozenTransactionId, consider it a match for whatever the previous Xmax value was.

  This problem seems to have appeared in 9.3 with multixact changes, though strictly speaking it seems unrelated.

  Since 9.4 we have commit 37484ad2a "Change the way we mark tuples as frozen", so the fix is simple: just compare the raw Xmin (still stored in the tuple header, since freezing merely set an infomask bit) to the Xmax. But in 9.3 we rewrite the Xmin value to FrozenTransactionId, so the original value is lost and we have nothing to compare the Xmax with. To cope with that case we need to compare the Xmin with FrozenXid, assume it's a match, and hope for the best. Sadly, since you can pg_upgrade a 9.3 instance containing half-frozen pages to newer releases, we need to keep the old check in newer versions too, which seems a bit brittle; I hope we can somehow get rid of that.

  I didn't optimize the new function for performance. The new coding is probably a bit slower than before, since there is a function call rather than a straight comparison, but I'd rather have it work correctly than be fast but wrong.

  This is a followup after 20b655224249 fixed a few related problems. Apparently, in 9.6 and up there are more ways to get into trouble, but in 9.3 - 9.5 I cannot reproduce a problem anymore with this patch, so there must be a separate bug.

  Reported-by: Peter Geoghegan
  Diagnosed-by: Peter Geoghegan, Michael Paquier, Daniel Wood, Yi Wen Wong, Álvaro
  Discussion: https://postgr.es/m/CAH2-Wznm4rCrhFAiwKPWTpEw2bXDtgROZK7jWWGucXeH3D1fmA@mail.gmail.com
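  A loose sketch of the chain-following check described above (the committed function in heapam.c is more elaborate, e.g. around multixact Xmax values):

      /* Does the given Xmax match the next tuple's Xmin, treating a
       * frozen Xmin as a match since the original value may be lost? */
      static bool
      HeapTupleUpdateXmaxMatchesXmin(TransactionId xmax, HeapTupleHeader htup)
      {
          TransactionId xmin = HeapTupleHeaderGetXmin(htup);

          if (TransactionIdEquals(xmin, xmax))
              return true;

          /* 9.3-era freezing overwrote Xmin with FrozenTransactionId;
           * assume a match and hope for the best. */
          if (TransactionIdEquals(xmin, FrozenTransactionId))
              return true;

          return false;
      }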
* Fix more user-visible elog() calls. (Robert Haas, 2017-10-05)

  Michael Paquier discovered that this could be triggered via SQL; give a nicer message instead.

  Patch by Michael Paquier, reviewed by Masahiko Sawada.

  Discussion: http://postgr.es/m/CAB7nPqQtPg+LKKtzdKN26judHcvPZ0s1gNigzOT4j8CYuuuBYg@mail.gmail.com
* Fix coding rules violations in walreceiver.c (Alvaro Herrera, 2017-10-03)

  1. Since commit b1a9bad9e744 we had pstrdup() inside a spinlock-protected critical section; reported by Andreas Seltenreich. Turn those into strlcpy() to stack-allocated variables instead. Backpatch to 9.6.

  2. Since commit 9ed551e0a4fd we had a pfree() uselessly inside a spinlock-protected critical section. Tom Lane noticed in code review. Move down. Backpatch to 9.6.

  3. Since commit 64233902d22b we had GetCurrentTimestamp() (a kernel call) inside a spinlock-protected critical section. Tom Lane noticed in code review. Move it up. Backpatch to 9.2.

  4. Since commit 1bb2558046cc we did elog(PANIC) while holding spinlock. Tom Lane noticed in code review. Release spinlock before dying. Backpatch to 9.2.

  Discussion: https://postgr.es/m/87h8vhtgj2.fsf@ansel.ydns.eu
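  A hedged sketch of the pattern behind items 1 and 3 (variable names are illustrative, not the committed walreceiver code): copy shared state to the stack while holding the spinlock, then do allocation, kernel calls, and elog() after releasing it.

      /* walrcv points at the walreceiver's shared-memory state (illustrative) */
      char        conninfo[MAXCONNINFO];
      TimestampTz now;

      SpinLockAcquire(&walrcv->mutex);
      strlcpy(conninfo, (char *) walrcv->conninfo, sizeof(conninfo));
      SpinLockRelease(&walrcv->mutex);

      now = GetCurrentTimestamp();    /* kernel call: outside the lock */
      elog(DEBUG1, "walreceiver conninfo at %s: %s",
           timestamptz_to_str(now), conninfo);    /* ditto */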
* Use a longer connection timeout in pg_isready test. (Tom Lane, 2017-10-01)

  Buildfarm members skink and sungazer have both recently failed this test, with symptoms indicating that the default 3-second timeout isn't quite enough for those very slow systems. There's no reason to be miserly with this timeout, so boost it to 60 seconds.

  Back-patch to all versions containing this test. That may be overkill, because the failure has only been observed in the v10 branch, but I don't feel like having to revisit this later.
* Fix freezing of a dead HOT-updated tuple (Alvaro Herrera, 2017-09-28)

  Vacuum calls page-level HOT prune to remove dead HOT tuples before doing liveness checks (HeapTupleSatisfiesVacuum) on the remaining tuples. But concurrent transaction commit/abort may turn DEAD some of the HOT tuples that survived the prune, before HeapTupleSatisfiesVacuum tests them. This happens to activate the code that decides to freeze the tuple ... which resuscitates it, duplicating data. (This is especially bad if there are any unique constraints, because those are now internally violated due to the duplicate entries, though you won't know until you try to REINDEX or dump/restore the table.)

  One possible fix would be to simply skip doing anything to the tuple, and hope that the next HOT prune would remove it. But there is a problem: if the tuple is older than freeze horizon, this would leave an unfrozen XID behind, and if no HOT prune happens to clean it up before the containing pg_clog segment is truncated away, it'd later cause an error when the XID is looked up.

  Fix the problem by having the tuple freezing routines cope with the situation: don't freeze the tuple (and keep it dead). In the cases that the XID is older than the freeze age, set the HEAP_XMAX_COMMITTED flag so that there is no need to look up the XID in pg_clog later on.

  An isolation test is included, authored by Michael Paquier, loosely based on Daniel Wood's original reproducer. It only tests one particular scenario, though, not all the possible ways for this problem to surface; it'd be good to have a more reliable way to test this more fully, but it'd require more work. In message https://postgr.es/m/20170911140103.5akxptyrwgpc25bw@alvherre.pgsql I outlined another test case (more closely matching Dan Wood's) that exposed a few more ways for the problem to occur.

  Backpatch all the way back to 9.3, where this problem was introduced by multixact juggling. In branches 9.3 and 9.4, this includes a backpatch of commit e5ff9fefcd50 (of 9.5 era), since the original is not correctable without matching the coding pattern in 9.5 up.

  Reported-by: Daniel Wood
  Diagnosed-by: Daniel Wood
  Reviewed-by: Yi Wen Wong, Michaël Paquier
  Discussion: https://postgr.es/m/E5711E62-8FDF-4DCA-A888-C200BF6B5742@amazon.com
* Fix behavior when converting a float infinity to numeric. (Tom Lane, 2017-09-27)

  float8_numeric() and float4_numeric() failed to consider the possibility that the input is an IEEE infinity. The results depended on the platform-specific behavior of sprintf(): on most platforms you'd get something like

      ERROR: invalid input syntax for type numeric: "inf"

  but at least on Windows it's possible for the conversion to succeed and deliver a finite value (typically 1), due to a nonstandard output format from sprintf and lack of syntax error checking in these functions.

  Since our numeric type lacks the concept of infinity, a suitable conversion is impossible; the best thing to do is throw an explicit error before letting sprintf do its thing.

  While at it, let's use snprintf not sprintf. Overrunning the buffer should be impossible if sprintf does what it's supposed to, but this is cheap insurance against a stack smash if it doesn't.

  Problem reported by Taiki Kondo. Patch by me based on fix suggestion from KaiGai Kohei. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/12A9442FBAE80D4E8953883E0B84E088C8C7A2@BPXM01GP.gisp.nec.co.jp
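  A hedged sketch of the shape of the fix (fragment only; the error code shown is illustrative):

      float8  val = PG_GETARG_FLOAT8(0);
      char    buf[DBL_DIG + 100];     /* DBL_DIG from <float.h> */

      /* numeric has no concept of infinity, so fail explicitly rather
       * than trusting platform-specific sprintf output */
      if (isinf(val))
          ereport(ERROR,
                  (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
                   errmsg("cannot convert infinity to numeric")));

      /* snprintf, not sprintf: cheap insurance against a stack smash */
      snprintf(buf, sizeof(buf), "%.*g", DBL_DIG, val);
      /* ... then run buf through the normal numeric input path ... */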
* Improve wording of error message added in commit 714805010. (Tom Lane, 2017-09-26)

  Per suggestions from Peter Eisentraut and David Johnston. Back-patch, like the previous commit.

  Discussion: https://postgr.es/m/E1dv9jI-0006oT-Fn@gemulon.postgresql.org
* Fix failure-to-read-man-page in commit 899bd785c. (Tom Lane, 2017-09-26)

  posix_fallocate() is not quite a drop-in replacement for fallocate(), because it is defined to return the error code as its function result, not in "errno". I (tgl) missed this because RHEL6's version seems to set errno as well. That is not the case on more modern Linuxen, though, as per buildfarm results.

  Aside from fixing the return-convention confusion, remove the test for ENOSYS; we expect that glibc will mask that for posix_fallocate, though it does not for fallocate. Keep the test for EINTR, because POSIX specifies that as a possible result, and buildfarm results suggest that it can happen in practice.

  Back-patch to 9.4, like the previous commit.

  Thomas Munro

  Discussion: https://postgr.es/m/1002664500.12301802.1471008223422.JavaMail.yahoo@mail.yahoo.com
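  A hedged sketch of the convention difference (fd and size stand for the segment's file descriptor and requested size): the result comes back as the return value, EINTR is retried, and there is no ENOSYS special case.

      int     rc;

      /* posix_fallocate() returns the error code; it does not set errno */
      do
      {
          rc = posix_fallocate(fd, 0, size);
      } while (rc == EINTR);      /* POSIX allows EINTR; retry */

      if (rc != 0)
      {
          errno = rc;             /* so later %m / strerror() report it */
          return false;
      }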
* Avoid SIGBUS on Linux when a DSM memory request overruns tmpfs. (Tom Lane, 2017-09-25)

  On Linux, shared memory segments created with shm_open() are backed by swap files created in tmpfs. If the swap file needs to be extended, but there's no tmpfs space left, you get a very unfriendly SIGBUS trap. To avoid this, force allocation of the full request size when we create the segment. This adds a few cycles, but none that we wouldn't expend later anyway, assuming the request isn't hugely bigger than the actual need.

  Make this code #ifdef __linux__, because (a) there's not currently a reason to think the same problem exists on other platforms, and (b) applying posix_fallocate() to an FD created by shm_open() isn't very portable anyway.

  Back-patch to 9.4 where the DSM code came in.

  Thomas Munro, per a bug report from Amul Sul

  Discussion: https://postgr.es/m/1002664500.12301802.1471008223422.JavaMail.yahoo@mail.yahoo.com
* Fix saving and restoring umask (Peter Eisentraut, 2017-09-23)

  In two cases, we set a different umask for some piece of code and restore it afterwards. But if the contained code errors out, the umask is not restored. So add TRY/CATCH blocks to fix that.
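  A hedged sketch of the pattern (PG_TRY/PG_CATCH are the backend's error-recovery macros; the mask value is illustrative):

      mode_t  oumask = umask(S_IRWXG | S_IRWXO);  /* restrictive mask */

      PG_TRY();
      {
          /* ... code that creates files under the temporary umask ... */
      }
      PG_CATCH();
      {
          umask(oumask);          /* restore even when an error escapes */
          PG_RE_THROW();
      }
      PG_END_TRY();
      umask(oumask);              /* normal path restores it too */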
* Sync our copy of the timezone library with IANA tzcode master. (Tom Lane, 2017-09-22)

  This patch absorbs a few unreleased fixes in the IANA code. It corresponds to commit 2d8b944c1cec0808ac4f7a9ee1a463c28f9cd00a in https://github.com/eggert/tz. Non-cosmetic changes include:

  TZDEFRULESTRING is updated to match current US DST practice, rather than what it was over ten years ago. This only matters for interpretation of POSIX-style zone names (e.g., "EST5EDT"), and only if the timezone database doesn't include either an exact match for the zone name or a "posixrules" entry. The latter should not be true in any current Postgres installation, but this could possibly matter when using --with-system-tzdata.

  Get rid of a nonportable use of "++var" on a bool var. This is part of a larger fix that eliminates some vestigial support for consecutive leap seconds, and adds checks to the "zic" compiler that the data files do not specify that.

  Remove a couple of ancient compatibility hacks. The IANA crew think these are obsolete, and I tend to agree. But perhaps our buildfarm will think different.

  Back-patch to all supported branches, in line with our policy that all branches should be using current IANA code. Before v10, this includes application of current pgindent rules, to avoid whitespace problems in future back-patches.

  Discussion: https://postgr.es/m/E1dsWhf-0000pT-F9@gemulon.postgresql.org
* Give a better error for duplicate entries in VACUUM/ANALYZE column list. (Tom Lane, 2017-09-21)

  Previously, the code didn't think about this case and would just try to analyze such a column twice. That would fail at the point of inserting the second version of the pg_statistic row, with obscure error messages like "duplicate key value violates unique constraint" or "tuple already updated by self", depending on context and PG version. We could allow the case by ignoring duplicate column specifications, but it seems better to reject it explicitly.

  The bogus error messages seem like arguably a bug, so back-patch to all supported versions.

  Nathan Bossart, per a report from Michael Paquier, and whacked around a bit by me.

  Discussion: https://postgr.es/m/E061A8E3-5E3D-494D-94F0-E8A9B312BBFC@amazon.com
* Fixed ECPG to correctly handle out-of-scope cursor declarations with pointers (Michael Meskes, 2017-09-18)

  ... or array variables.
* Allow rel_is_distinct_for() to look through RelabelType below OpExpr. (Tom Lane, 2017-09-17)

  This lets it do the right thing for, eg, varchar columns. Back-patch to 9.5 where this logic appeared.

  David Rowley, per report from Kim Rose Carlsen

  Discussion: https://postgr.es/m/VI1PR05MB17091F9A9876528055D6A827C76D0@VI1PR05MB1709.eurprd05.prod.outlook.com
* Fix possible dangling pointer dereference in trigger.c. (Tom Lane, 2017-09-17)

  AfterTriggerEndQuery correctly notes that the query_stack could get repalloc'd during a trigger firing, but it nonetheless passes the address of a query_stack entry to afterTriggerInvokeEvents, so that if such a repalloc occurs, afterTriggerInvokeEvents is already working with an obsolete dangling pointer while it scans the rest of the events. Oops.

  The only code at risk is its "delete_ok" cleanup code, so we can prevent unsafe behavior by passing delete_ok = false instead of true. However, that could have a significant performance penalty, because the point of passing delete_ok = true is to not have to re-scan possibly a large number of dead trigger events on the next time through the loop. There's more than one way to skin that cat, though. What we can do is delete all the "chunks" in the event list except the last one, since we know all events in them must be dead. Deleting the chunks is work we'd have had to do later in AfterTriggerEndQuery anyway, and it ends up saving rescanning of just about the same events we'd have gotten rid of with delete_ok = true.

  In v10 and HEAD, we also have to be careful to mop up any per-table after_trig_events pointers that would become dangling. This is slightly annoying, but I don't think that normal use-cases will traverse this code path often enough for it to be a performance problem.

  It's pretty hard to hit this in practice because of the unlikelihood of the query_stack getting resized at just the wrong time. Nonetheless, it's definitely a live bug of ancient standing, so back-patch to all supported branches.

  Discussion: https://postgr.es/m/2891.1505419542@sss.pgh.pa.us
* Fix translatable string (Alvaro Herrera, 2017-09-07)

  Discussion: https://postgr.es/m/20170828130545.sdajqlpr37hmmd6a@alvherre.pgsql
* Fix macro-redefinition warning on MSVC. (Tom Lane, 2017-09-03)

  In commit 9d6b160d7, I tweaked pg_config.h.win32 to use "#define HAVE_LONG_LONG_INT_64 1" rather than defining it as empty, for consistency with what happens in an autoconf'd build. But Solution.pm injects another definition of that macro into ecpg_config.h, leading to justifiable (though harmless) compiler whining. Make that one consistent too.

  Back-patch, like the previous patch.

  Discussion: https://postgr.es/m/CAEepm=1dWsXROuSbRg8PbKLh0S=8Ou-V8sr05DxmJOF5chBxqQ@mail.gmail.com
* Make [U]INT64CONST safe for use in #if conditions. (Tom Lane, 2017-09-01)

  Instead of using a cast to force the constant to be the right width, assume we can plaster on an L, UL, LL, or ULL suffix as appropriate. The old approach to this is very hoary, dating from before we were willing to require compilers to have working int64 types.

  This fix makes the PG_INT64_MIN, PG_INT64_MAX, and PG_UINT64_MAX constants safe to use in preprocessor conditions, where a cast doesn't work. Other symbolic constants that might be defined using [U]INT64CONST are likewise safer than before.

  Also fix the SIZE_MAX macro to be similarly safe, if we are forced to provide a definition for that. The test added in commit 2e70d6b5e happens to do what we want even with the hack "(size_t) -1" definition, but we could easily get burnt on other tests in future.

  Back-patch to all supported branches, like the previous commits.

  Discussion: https://postgr.es/m/15883.1504278595@sss.pgh.pa.us
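  A rough sketch of the new approach (simplified; the real definitions in c.h are conditional on how int64 is spelled on the platform):

      /* paste a suffix instead of casting, so the constant is usable in
       * preprocessor arithmetic, where "((int64) x)" is not */
      #ifdef HAVE_LONG_INT_64
      #define INT64CONST(x)  (x##L)
      #define UINT64CONST(x) (x##UL)
      #else
      #define INT64CONST(x)  (x##LL)
      #define UINT64CONST(x) (x##ULL)
      #endif

      #define PG_INT64_MAX   INT64CONST(0x7FFFFFFFFFFFFFFF)

      /* now legal in an #if condition: */
      #if PG_INT64_MAX < 0
      #error "int64 misconfigured"
      #endif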
* Ensure SIZE_MAX can be used throughout our code. (Tom Lane, 2017-09-01)

  Pre-C99 platforms may lack <stdint.h> and thereby SIZE_MAX. We have a couple of places using the hack "(size_t) -1" as a fallback, but it wasn't universally available; which means the code added in commit 2e70d6b5e fails to compile everywhere. Move that hack to c.h so that we can rely on having SIZE_MAX everywhere.

  Per discussion, it'd be a good idea to make the macro's value safe for use in #if-tests, but that will take a bit more work. This is just a quick expedient to get the buildfarm green again.

  Back-patch to all supported branches, like the previous commit.

  Discussion: https://postgr.es/m/15883.1504278595@sss.pgh.pa.us
* Teach libpq to detect integer overflow in the row count of a PGresult. (Tom Lane, 2017-08-29)

  Adding more than 1 billion rows to a PGresult would overflow its ntups and tupArrSize fields, leading to client crashes. It'd be desirable to use wider fields on 64-bit machines, but because all of libpq's external APIs use plain "int" for row counters, that's going to be hard to accomplish without an ABI break. Given the lack of complaints so far, and the general pain that would be involved in using such huge PGresults, let's settle for just preventing the overflow and reporting a useful error message if it does happen.

  Also, for a couple more lines of code we can increase the threshold of trouble from INT_MAX/2 to INT_MAX rows. To do that, refactor pqAddTuple() to allow returning an error message that replaces the default assumption that it failed because of out-of-memory.

  Along the way, fix PQsetvalue() so that it reports all failures via pqInternalNotice(). It already did so in the case of bad field number, but neglected to report anything for other error causes.

  Because of the potential for crashes, this seems like a back-patchable bug fix, despite the lack of field reports.

  Michael Paquier, per a complaint from Igor Korot.

  Discussion: https://postgr.es/m/CA+FnnTxyLWyjY1goewmJNxC==HQCCF4fKkoCTa9qR36oRAHDPw@mail.gmail.com
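  A hedged sketch of the refactored growth check (the message text is illustrative; ntups and tupArrSize are the PGresult fields named above):

      /* inside pqAddTuple(), which now reports errors via *errmsgp */
      if (res->ntups >= res->tupArrSize)
      {
          int     newSize;

          if (res->tupArrSize == INT_MAX)
          {
              *errmsgp = libpq_gettext("PGresult cannot support more than INT_MAX tuples");
              return false;
          }
          /* double the array, clamping at INT_MAX instead of INT_MAX/2 */
          newSize = (res->tupArrSize > 0) ?
              (res->tupArrSize <= INT_MAX / 2 ? res->tupArrSize * 2 : INT_MAX)
              : 128;
          /* ... realloc the tuples array to newSize entries ... */
      }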
* Stamp 9.5.9. (tag: REL9_5_9) (Tom Lane, 2017-08-28)
* Translation updates (Peter Eisentraut, 2017-08-28)

  Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: bb30ba75db8403a9ce4fb8ba6b7c3fe42ac4069e
* Fix outdated comment (Peter Eisentraut, 2017-08-23)

  Author: Thomas Munro <thomas.munro@enterprisedb.com>
* Fix translation marker (Peter Eisentraut, 2017-08-23)

  This was erroneously removed in 55a70a023c3daefca9bbd68bfbe6862af10ab479.
* Backpatch introduction of TupleDescAttr(tupdesc, i). (Andres Freund, 2017-08-22)

  2cd70845240 / c6293249d change the way individual attributes in a TupleDesc are stored / accessed. To reduce the effort of making extensions compatible with postgresql 11, and to ease future backpatching, backpatch introduction of TupleDescAttr() to all releases. Do not backpatch change in storage, as that'd be a breaking change for existing and working extensions.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20170820181723.tdswdinzptbcwhrr@alap3.anarazel.de
  Backpatch: 9.2-
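  In pre-11 branches the macro is essentially an array accessor, so extension code can use one spelling everywhere; a common fallback for minor releases that predate the backpatch looks like this (a sketch, not project-mandated):

      /* provided by access/tupdesc.h once the backpatch is in place */
      #ifndef TupleDescAttr
      #define TupleDescAttr(tupdesc, i) ((tupdesc)->attrs[(i)])
      #endif

      /* usage: also works unchanged on v11, where the storage differs */
      Form_pg_attribute attr = TupleDescAttr(tupdesc, 0);
      elog(DEBUG1, "first column is %s", NameStr(attr->attname));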
* Fix possible core dump in parallel restore when using a TOC list. (Tom Lane, 2017-08-19)

  Commit 3eb9a5e7c unintentionally introduced an ordering dependency into restore_toc_entries_prefork(). The existing coding of reduce_dependencies() contains a check to skip moving a TOC entry to the ready_list if it wasn't initially in the pending_list. This used to suffice to prevent reduce_dependencies() from trying to move anything into the ready_list during restore_toc_entries_prefork(), because the pending_list stayed empty throughout that phase; but it no longer does. The problem doesn't manifest unless the TOC has been reordered by SortTocFromFile, which is how I missed it in testing.

  To fix, just add a test for ready_list == NULL, converting the call with NULL from a poor man's sanity check into an explicit command not to touch TOC items' list membership. Clarify some of the comments around this; in particular, note the primary purpose of the check for pending_list membership, which is to ensure that we can't try to restore the same item twice, in case a TOC list forces it to be restored before its dependency count goes to zero.

  Per report from Fabrízio de Royes Mello. Back-patch to 9.3, like the previous commit.

  Discussion: https://postgr.es/m/CAFcNs+pjuv0JL_x4+=71TPUPjdLHOXA4YfT32myj_OrrZb4ohA@mail.gmail.com
* Further tweaks to compiler flags for PL/Perl on Windows. (Tom Lane, 2017-08-17)

  It now emerges that we can only rely on Perl to tell us we must use -D_USE_32BIT_TIME_T if it's Perl 5.13.4 or later. For older versions, revert to our previous practice of assuming we need that symbol in all 32-bit Windows builds. This is not ideal, but inquiring into which compiler version Perl was built with seems far too fragile. In any case, we had not previously had complaints about these old Perl versions, so let's assume this is Good Enough. (It's still better than the situation ante commit 5a5c2feca, in that at least the effects are confined to PL/Perl rather than the whole PG build.)

  Back-patch to all supported versions, like 5a5c2feca and predecessors.

  Discussion: https://postgr.es/m/CANFyU97OVQ3+Mzfmt3MhuUm5NwPU=-FtbNH5Eb7nZL9ua8=rcA@mail.gmail.com
* Changed ecpg parser to allow RETURNING clauses without attached C variables. (Michael Meskes, 2017-08-16)
* Initialize replication_slot_catalog_xmin in procarray (Peter Eisentraut, 2017-08-15)

  Although not confirmed and probably rare, if the newly allocated memory is not already zero, this could possibly have caused some problems. Also reorder the initializations slightly so they match the order of the struct definition.

  Author: Wong, Yi Wen <yiwong@amazon.com>
  Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
* Include foreign tables in information_schema.table_privileges (Peter Eisentraut, 2017-08-15)

  This appears to have been an omission in the original commit 0d692a0dc9f. All related information_schema views already include foreign tables.

  Reported-by: Nicolas Thauvin <nicolas.thauvin@dalibo.com>
* Handle elog(FATAL) during ROLLBACK more robustly. (Tom Lane, 2017-08-14)

  Stress testing by Andreas Seltenreich disclosed longstanding problems that occur if a FATAL exit (e.g. due to receipt of SIGTERM) occurs while we are trying to execute a ROLLBACK of an already-failed transaction. In such a case, xact.c is in TBLOCK_ABORT state, so that AbortOutOfAnyTransaction would skip AbortTransaction and go straight to CleanupTransaction. This led to an assert failure in an assert-enabled build (due to the ROLLBACK's portal still having a cleanup hook) or without assertions, to a FATAL exit complaining about "cannot drop active portal". The latter's not disastrous, perhaps, but it's messy enough to want to improve it.

  We don't really want to run all of AbortTransaction in this code path. The minimum required to clean up the open portal safely is to do AtAbort_Memory and AtAbort_Portals. It seems like a good idea to do AtAbort_Memory unconditionally, to be entirely sure that we are starting with a safe CurrentMemoryContext. That means that if the main loop in AbortOutOfAnyTransaction does nothing, we need an extra step at the bottom to restore CurrentMemoryContext = TopMemoryContext, which I chose to do by invoking AtCleanup_Memory. This'll result in calling AtCleanup_Memory twice in many of the paths through this function, but that seems harmless and reasonably inexpensive.

  The original motivation for the assertion in AtCleanup_Portals was that we wanted to be sure that any user-defined code executed as a consequence of the cleanup hook runs during AbortTransaction not CleanupTransaction. That still seems like a valid concern, and now that we've seen one case of the assertion firing --- which means that exactly that would have happened in a production build --- let's replace the Assert with a runtime check. If we see the cleanup hook still set, we'll emit a WARNING and just drop the hook unexecuted.

  This has been like this a long time, so back-patch to all supported branches.

  Discussion: https://postgr.es/m/877ey7bmun.fsf@ansel.ydns.eu
* Absorb -D_USE_32BIT_TIME_T switch from Perl, if relevant. (Tom Lane, 2017-08-14)

  Commit 3c163a7fc's original choice to ignore all #define symbols whose names begin with underscore turns out to be too simplistic. On Windows, some Perl installations are built with -D_USE_32BIT_TIME_T, and we must absorb that or we get the wrong result for sizeof(PerlInterpreter).

  This effectively re-reverts commit ef58b87df, which injected that symbol in a hacky way, making it apply to all of Postgres not just PL/Perl. More significantly, it did so on *all* 32-bit Windows builds, even when the Perl build to be used did not select this option; so that it fails to work properly with some newer Perl builds.

  By making this change, we would be introducing an ABI break in 32-bit Windows builds; but fortunately we have not used type time_t in any exported Postgres APIs in a long time. So it should be OK, both for PL/Perl itself and for third-party extensions, if an extension library is built with a different _USE_32BIT_TIME_T setting than the core code.

  Patch by me, based on research by Ashutosh Sharma and Robert Haas. Back-patch to all supported branches, as commit 3c163a7fc was.

  Discussion: https://postgr.es/m/CANFyU97OVQ3+Mzfmt3MhuUm5NwPU=-FtbNH5Eb7nZL9ua8=rcA@mail.gmail.com
* Remove AtEOXact_CatCache(). (Tom Lane, 2017-08-13)

  The sole useful effect of this function, to check that no catcache entries have positive refcounts at transaction end, has really been obsolete since we introduced ResourceOwners in PG 8.1. We reduced the checks to assertions years ago, so that the function was a complete no-op in production builds. There have been previous discussions about removing it entirely, but consensus up to now was that it had some small value as a cross-check for bugs in the ResourceOwner logic.

  However, it now emerges that it's possible to trigger these assertions if you hit an assert-enabled backend with SIGTERM during a call to SearchCatCacheList, because that function temporarily increases the refcounts of entries it's intending to add to a catcache list construct. In a normal ERROR scenario, the extra refcounts are cleaned up by SearchCatCacheList's PG_CATCH block; but in a FATAL exit we do a transaction abort and exit without ever executing PG_CATCH handlers.

  There's a case to be made that this is a generic hazard and we should consider restructuring elog(FATAL) handling so that pending PG_CATCH handlers do get run. That's pretty scary though: it could easily create more problems than it solves. Preliminary stress testing by Andreas Seltenreich suggests that there are not many live problems of this ilk, so we rejected that idea.

  There are more-localized ways to fix the problem; the most principled one would be to use PG_ENSURE_ERROR_CLEANUP instead of plain PG_TRY. But adding cycles to SearchCatCacheList isn't very appealing. We could also weaken the assertions in AtEOXact_CatCache in some more or less ad-hoc way, but that just makes its raison d'etre even less compelling. In the end, the most reasonable solution seems to be to just remove AtEOXact_CatCache altogether, on the grounds that it's not worth trying to fix it. It hasn't found any bugs for us in many years.

  Per report from Jeevan Chalke. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/CAM2+6=VEE30YtRQCZX7_sCFsEpoUkFBV1gZazL70fqLn8rcvBA@mail.gmail.com
* Fix handling of container types in find_composite_type_dependencies. (Tom Lane, 2017-08-09)

  find_composite_type_dependencies correctly found columns that are of the specified type, and columns that are of arrays of that type, but not columns that are domains or ranges over the given type, its array type, etc. The most general way to handle this seems to be to assume that any type that is directly dependent on the specified type can be treated as a container type, and processed recursively (allowing us to handle nested cases such as ranges over domains over arrays ...). Since a type's array type already has such a dependency, we can drop the existing special case for the array type.

  The very similar logic in get_rels_with_domain was likewise a few bricks shy of a load, as it supposed that a directly dependent type could *only* be a sub-domain. This is already wrong for ranges over domains, and it'll someday be wrong for arrays over domains.

  Add test cases illustrating the problems, and back-patch to all supported branches.

  Discussion: https://postgr.es/m/15268.1502309024@sss.pgh.pa.us
* Reword some unclear comments (Alvaro Herrera, 2017-08-08)
* Stamp 9.5.8. (tag: REL9_5_8) (Tom Lane, 2017-08-07)