path: root/src
* Update time zone data files to tzdata release 2016h.  (Tom Lane, 2016-10-20)
  (Didn't I just do this? Oh well.) DST law changes in Palestine. Historical
  corrections for Turkey. Switch to numeric abbreviations for Asia/Colombo.
* Another portability fix for tzcode2016g update.  (Tom Lane, 2016-10-19)
  clang points out that SIZE_MAX wouldn't fit into an int, which means this
  comparison is pretty useless. Per report from Thomas Munro.
* Windows portability fix.  (Tom Lane, 2016-10-19)
  Per buildfarm.
* Sync our copy of the timezone library with IANA release tzcode2016g.  (Tom Lane, 2016-10-19)
  This is mostly to absorb some corner-case fixes in zic for year-2037
  timestamps. The other changes that have been made are unlikely to affect
  our usage, but nonetheless we may as well take 'em.
* Suppress "Factory" zone in pg_timezone_names view for tzdata >= 2016g.  (Tom Lane, 2016-10-19)
  IANA got rid of the really silly "abbreviation" and replaced it with one
  that's only moderately silly. But it's still pointless, so keep on not
  showing it.
* Update time zone data files to tzdata release 2016g.  (Tom Lane, 2016-10-19)
  DST law changes in Turkey. Historical corrections for America/Los_Angeles,
  Europe/Kirov, Europe/Moscow, Europe/Samara, and Europe/Ulyanovsk. Rename
  Asia/Rangoon to Asia/Yangon, with a backward compatibility link.

  The IANA crew continue their campaign to replace invented time zone
  abbreviations with numeric GMT offsets. This update changes numerous zones
  in Antarctica and the former Soviet Union; for instance, Antarctica/Casey
  now reports "+08" not "AWST" in the pg_timezone_names view. I kept these
  abbreviations in the tznames/ data files, however, so that we will still
  accept them for input. (We may want to start trimming those files someday,
  but today is not that day.)

  An exception is that since IANA no longer claims that "AMT" is in use in
  Armenia for GMT+4, I replaced it in the Default file with GMT-4,
  corresponding to Amazon Time which is in use in South America. It may be
  that that meaning is also invented and IANA will drop it in a future
  update; but for now, it seems silly to give pride of place to a meaning
  not traceable to IANA over one that is.
* Fix WAL-logging of FSM and VM truncation.  (Heikki Linnakangas, 2016-10-19)
  When a relation is truncated, it is important that the FSM is truncated as
  well. Otherwise, after recovery, the FSM can return a page that has been
  truncated away, leading to errors like:

    ERROR: could not read block 28991 in file "base/16390/572026":
    read only 0 of 8192 bytes

  We were using MarkBufferDirtyHint() to dirty the buffer holding the last
  remaining page of the FSM, but during recovery, that might in fact not
  dirty the page, and the FSM update might be lost. To fix, use the stronger
  MarkBufferDirty() function. MarkBufferDirty() requires us to do
  WAL-logging ourselves, to protect from a torn page, if checksumming is
  enabled.

  Also fix an oversight in visibilitymap_truncate: it also needs to WAL-log
  when checksumming is enabled.

  Analysis by Pavan Deolasee.
  Discussion: <CABOikdNr5vKucqyZH9s1Mh0XebLs_jRhKv6eJfNnD2wxTn=_9A@mail.gmail.com>
  Backpatch to 9.3, where we got data checksums.
* Fix cidin() to handle values above 2^31 platform-independently.  (Tom Lane, 2016-10-18)
  CommandId is declared as uint32, and values up to 4G are indeed legal.
  cidout() handles them properly by treating the value as unsigned int. But
  cidin() was just using atoi(), which has platform-dependent behavior for
  values outside the range of signed int, as reported by Bart Lengkeek in
  bug #14379. Use strtoul() instead, as xidin() does.

  In passing, make some purely cosmetic changes to make xidin/xidout look
  more like cidin/cidout; the former didn't have a monopoly on best practice
  IMO.

  Neither xidin nor cidin makes any attempt to throw error for invalid
  input. I didn't change that here, and am not sure it's worth worrying
  about since neither is really a user-facing type. The point is just to
  ensure that indubitably-valid inputs work as expected.

  It's been like this for a long time, so back-patch to all supported
  branches.
  Report: <20161018152550.1413.6439@wrigleys.postgresql.org>
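  For illustration, the atoi()-vs-strtoul() difference is easy to see in a
  standalone C sketch (not the actual cidin()/xidin() code; the value below
  is just an arbitrary CommandId above 2^31):

      #include <stdio.h>
      #include <stdlib.h>

      int
      main(void)
      {
          const char *s = "3000000000";   /* a CommandId above 2^31 */

          /* atoi() overflows signed int here; the result is platform-dependent */
          int bad = atoi(s);

          /* strtoul() covers the full unsigned 32-bit range, as xidin() already did */
          unsigned int good = (unsigned int) strtoul(s, NULL, 10);

          printf("atoi: %d, strtoul: %u\n", bad, good);
          return 0;
      }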
* Fix use-after-free around DISTINCT transition function calls.  (Heikki Linnakangas, 2016-10-17)
  Have tuplesort_gettupleslot() copy the contents of its current table slot
  as needed. This is based on an approach taken by tuplestore_gettupleslot().
  In the future, tuplesort_gettupleslot() may also be taught to avoid copying
  the tuple where the caller can determine that that is safe (the
  tuplestore_gettupleslot() interface already offers this option to callers).

  Patch by Peter Geoghegan. Fixes bug #14344, reported by Regina Obe.
  Report: <20160929035538.20224.39628@wrigleys.postgresql.org>
  Backpatch-through: 9.6
* Fix assorted integer-overflow hazards in varbit.c.  (Tom Lane, 2016-10-14)
  bitshiftright() and bitshiftleft() would recursively call each other
  infinitely if the user passed INT_MIN for the shift amount, due to integer
  overflow in negating the shift amount. To fix, clamp to -VARBITMAXLEN.
  That doesn't change the results since any shift distance larger than the
  input bit string's length produces an all-zeroes result.

  Also fix some places that seemed inadequately paranoid about input typmods
  exceeding VARBITMAXLEN. While a typmod accepted by anybit_typmodin() will
  certainly be much less than that, at least some of these spots are
  reachable with user-chosen integer values.

  Andreas Seltenreich and Tom Lane
  Discussion: <87d1j2zqtz.fsf@credativ.de>
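  The recursion hazard is plain C integer overflow; a simplified sketch of
  the pattern (hypothetical shift_left/shift_right helpers, not the actual
  varbit.c functions, with an arbitrary clamp value standing in for
  -VARBITMAXLEN):

      static unsigned int shift_left(unsigned int value, int shift);

      static unsigned int
      shift_right(unsigned int value, int shift)
      {
          if (shift < 0)
          {
              /*
               * Without the clamp, negating INT_MIN overflows and in practice
               * yields INT_MIN again, so shift_left() bounces straight back
               * here and the two functions recurse until the stack runs out.
               */
              if (shift < -1024)
                  shift = -1024;          /* clamp before negating */
              return shift_left(value, -shift);
          }
          return (shift >= 32) ? 0 : value >> shift;
      }

      static unsigned int
      shift_left(unsigned int value, int shift)
      {
          if (shift < 0)
          {
              if (shift < -1024)
                  shift = -1024;
              return shift_right(value, -shift);
          }
          return (shift >= 32) ? 0 : value << shift;
      }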
* Fix handling of pgstat counters for TRUNCATE in a prepared transaction.  (Tom Lane, 2016-10-13)
  pgstat_twophase_postcommit is supposed to duplicate the math in
  AtEOXact_PgStat, but it had missed out the bit about clearing
  t_delta_live_tuples/t_delta_dead_tuples for a TRUNCATE.

  It's harder than you might think to replicate the issue here, because
  those counters would only be nonzero when a previous transaction in the
  same backend had added/deleted tuples in the truncated table, and those
  counts hadn't been sent to the stats collector yet.

  Evident oversight in commit d42358efb. I've not added a regression test
  for this; we tried to add one in d42358efb, and had to revert it because
  it was too timing-sensitive for the buildfarm.

  Back-patch to 9.5 where d42358efb came in.

  Stas Kelvich
  Discussion: <EB57BF68-C06D-4737-BDDC-4BA778F4E62B@postgrespro.ru>
* Fix another bug in merging of inherited CHECK constraints.  (Tom Lane, 2016-10-13)
  It's not good for an inherited child constraint to be marked connoinherit;
  that would result in the constraint not propagating to grandchild tables,
  if any are created later. The code mostly prevented this from happening
  but there was one case that was missed.

  This is somewhat related to commit e55a946a8, which also tightened checks
  on constraint merging. Hence, back-patch to 9.2 like that one. This isn't
  so much because there's a concrete feature-related reason to stop there,
  as to avoid having more distinct behaviors than we have to in this area.

  Amit Langote
  Discussion: <b28ee774-7009-313d-dd55-5bdd81242c41@lab.ntt.co.jp>
* Try to find out the actual hugepage size when making a MAP_HUGETLB request.  (Tom Lane, 2016-10-13)
  Even if Linux's mmap() is okay with a partial-hugepage request, munmap()
  is not, as reported by Chris Richards. Therefore it behooves us to try a
  bit harder to find out the actual hugepage size, instead of assuming that
  we can skate by with a guess.

  For the moment, just look into /proc/meminfo to find out the default
  hugepage size, and use that. Later, on kernels that support requests for
  nondefault sizes, we might try to consider other alternatives. But that
  smells more like a new feature than a bug fix, especially if we want to
  provide any way for the DBA to control it, so leave it for another day.

  I set this up to allow easy addition of platform-specific code for
  non-Linux platforms, if needed; but right now there are no reports
  suggesting that we need to work harder on other platforms.

  Back-patch to 9.4 where hugepage support was introduced.
  Discussion: <31056.1476303954@sss.pgh.pa.us>
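  The detection boils down to scanning /proc/meminfo for the default
  hugepage size; a minimal standalone sketch of that idea (error handling
  and any non-Linux fallback omitted):

      #include <stdio.h>

      /* Return the default hugepage size in bytes, or 0 if unknown. */
      static size_t
      default_hugepage_size(void)
      {
          FILE   *f = fopen("/proc/meminfo", "r");
          char    line[128];
          size_t  result = 0;

          if (f == NULL)
              return 0;
          while (fgets(line, sizeof(line), f))
          {
              unsigned long kb;

              /* typical line: "Hugepagesize:       2048 kB" */
              if (sscanf(line, "Hugepagesize: %lu kB", &kb) == 1)
              {
                  result = (size_t) kb * 1024;
                  break;
              }
          }
          fclose(f);
          return result;
      }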
* Clean up handling of anonymous mmap'd shared-memory segment.  (Tom Lane, 2016-10-13)
  Fix detaching of the mmap'd segment to have its own on_shmem_exit
  callback, rather than piggybacking on the one for detaching from the SysV
  segment. That was confusing, and given the distance between the two attach
  calls, it was trouble waiting to happen.

  Make the detaching calls idempotent by clearing AnonymousShmem to show
  we've already unmapped. I spent quite a bit of time yesterday trying to
  find a path that would allow the munmap()'s to be done twice, and while I
  did not succeed, it seems silly that there's even a question.

  Make the #ifdef logic less confusing by separating "do we want to use
  anonymous shmem" from EXEC_BACKEND. Even though there's no current
  scenario where those conditions are different, it is not helpful for
  different places in the same file to be testing EXEC_BACKEND for what are
  fundamentally different reasons.

  Don't do on_exit_reset() in StartBackgroundWorker(). At best that's
  useless (InitPostmasterChild would have done it already) and at worst it
  could zap some callback that's unrelated to shared memory.

  Improve comments, and simplify the huge_pages enablement logic slightly.

  Back-patch to 9.4 where hugepage support was introduced. Arguably this
  should go into 9.3 as well, but the code looks significantly different
  there, and I doubt it's worth the trouble of adapting the patch given I
  can't show a live bug.
* Fix broken jsonb_set() logic for replacing array elements.  (Tom Lane, 2016-10-13)
  Commit 0b62fd036 did a fairly sloppy job of refactoring setPath() to
  support jsonb_insert() along with jsonb_set(). In its defense, though,
  there was no regression test case exercising the case of replacing an
  existing element in a jsonb array.

  Per bug #14366 from Peng Sun. Back-patch to 9.6 where bug was introduced.
  Report: <20161012065349.1412.47858@wrigleys.postgresql.org>
* Revert addition of PGDLLEXPORT in PG_FUNCTION_INFO_V1 macro.  (Tom Lane, 2016-10-12)
  This turns out not to be as harmless as I thought: MSVC will complain if
  it sees an "extern" declaration without PGDLLEXPORT and then one with.
  (Seems fairly silly, given that this can be changed after the fact by the
  linker, but there you have it.) Therefore, contrib modules that have
  extern's for V1 functions in header files are falling over in the
  buildfarm, since none of those externs are marked PGDLLEXPORT.

  We might or might not conclude that we're willing to plaster those
  declarations with PGDLLEXPORT in HEAD, but in any case there's no way
  we're going to ship this change in the back branches. Third-party authors
  would not thank us for breaking their code in a minor release. Hence,
  revert the addition of PGDLLEXPORT (but let's keep the extra info in the
  comment). If we do the other changes we can revert this commit in HEAD.

  Per buildfarm.
* Provide DLLEXPORT markers for C functions via PG_FUNCTION_INFO_V1 macro.  (Tom Lane, 2016-10-12)
  This isn't really necessary for our own code, because we use a .DEF file
  in MSVC builds (see gendef.pl), or --export-all-symbols in MinGW and
  Cygwin builds, to ensure that all global symbols in loadable modules will
  be exported on Windows. However, third-party authors might use different
  build processes that need this marker, and it's harmless enough for our
  own builds.

  To some extent, this is an oversight in commit e7128e8db, so back-patch to
  9.4 where that was added.

  Laurenz Albe
  Discussion: <A737B7A37273E048B164557ADEF4A58B539300BD@ntex2010a.host.magwien.gv.at>
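  For context, a V1 extension function built around this macro looks roughly
  like the following (my_add_one is a made-up example, not code from the
  tree). The point of the commit is that the declaration emitted by
  PG_FUNCTION_INFO_V1 now carries PGDLLEXPORT; the revert above was needed
  because separate header declarations such as
  "extern Datum my_add_one(PG_FUNCTION_ARGS);" then clash under MSVC:

      #include "postgres.h"
      #include "fmgr.h"

      PG_MODULE_MAGIC;

      PG_FUNCTION_INFO_V1(my_add_one);

      Datum
      my_add_one(PG_FUNCTION_ARGS)
      {
          int32 arg = PG_GETARG_INT32(0);

          PG_RETURN_INT32(arg + 1);
      }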
* Fix copy-pasto in comment.  (Heikki Linnakangas, 2016-10-12)
  Amit Langote
* In PQsendQueryStart(), avoid leaking any left-over async result.  (Tom Lane, 2016-10-10)
  Ordinarily there would not be an async result sitting around at this
  point, but it appears that in corner cases there can be. Considering all
  the work we're about to launch, it's hardly going to cost anything
  noticeable to check.

  It's been like this forever, so back-patch to all supported branches.
  Report: <CAD-Qf1eLUtBOTPXyFQGW-4eEsop31tVVdZPu4kL9pbQ6tJPO8g@mail.gmail.com>
* Fix incorrect handling of polymorphic aggregates used as window functions.  (Tom Lane, 2016-10-09)
  The transfunction was told that its first argument and result were of the
  window function output type, not the aggregate state type. This'd only
  matter if the transfunction consults get_fn_expr_argtype, which typically
  only polymorphic functions would do.

  Although we have several regression tests around polymorphic aggs, none of
  them detected this mistake --- in fact, they still didn't fail when I
  injected the same mistake into nodeAgg.c. So add some more tests covering
  both plain agg and window-function-agg cases.

  Per report from Sebastian Luque. Back-patch to 9.6 where the error was
  introduced (by sloppy refactoring in commit 804163bc2, looks like).
  Report: <87int2qkat.fsf@gmail.com>
* Fix two bugs in merging of inherited CHECK constraints.  (Tom Lane, 2016-10-08)
  Historically, we've allowed users to add a CHECK constraint to a child
  table and then add an identical CHECK constraint to the parent. This
  results in "merging" the two constraints so that the pre-existing child
  constraint ends up with both conislocal = true and coninhcount > 0.
  However, if you tried to do it in the other order, you got a duplicate
  constraint error. This is problematic for pg_dump, which needs to issue
  separate ADD CONSTRAINT commands in some cases, but has no good way to
  ensure that the constraints will be added in the required order. And it's
  more than a bit arbitrary, too. The goal of complaining about duplicated
  ADD CONSTRAINT commands can be served if we reject the case of adding a
  constraint when the existing one already has conislocal = true; but if it
  has conislocal = false, let's just make the ADD CONSTRAINT set conislocal
  = true. In this way, either order of adding the constraints has the same
  end result.

  Another problem was that the code allowed creation of a parent constraint
  marked convalidated that is merged with a child constraint that is
  !convalidated. In this case, an inheritance scan of the parent table could
  emit some rows violating the constraint condition, which would be an
  unexpected result given the marking of the parent constraint as validated.
  Hence, forbid merging of constraints in this case. (Note: valid child and
  not-valid parent seems fine, so continue to allow that.)

  Per report from Benedikt Grundmann. Back-patch to 9.2 where we introduced
  possibly-not-valid check constraints. The second bug obviously doesn't
  apply before that, and I think the first doesn't either, because pg_dump
  only gets into this situation when dealing with not-valid constraints.

  Report: <CADbMkNPT-Jz5PRSQ4RbUASYAjocV_KHUWapR%2Bg8fNvhUAyRpxA%40mail.gmail.com>
  Discussion: <22108.1475874586@sss.pgh.pa.us>
* Remove user_relns() SRF from regression tests.  (Tom Lane, 2016-10-08)
  Back-patch commit 0dba54f1666ead71c54ce100b39efda67596d297 into the older
  branches. This test is almost as much of a patching hazard there as it is
  in HEAD, and it has no more reason to be needed than it does in HEAD. I
  went back as far as 9.2; I judged 9.1 not worth the trouble since it's on
  the verge of being EOL'd.
* libpqwalreceiver needs to link with libintl when using --enable-nls.  (Tom Lane, 2016-10-07)
  The need for this was previously obscured even on picky platforms by the
  hack we used to support direct cross-module references in the transforms
  contrib modules. Now that that hack is gone, the undefined symbol is
  exposed, as reported by Robert Haas.

  Back-patch to 9.5 where we started to use -Wl,-undefined,dynamic_lookup.
  I'm a bit surprised that the older branches don't seem to contain any
  gettext references in this module, but since they don't fail at build
  time, they must not. (We might be able to get away with leaving this alone
  in 9.5/9.6, but I think it's cleaner if the reference gets resolved at
  link time.)

  Report: <CA+TgmoaHJKU5kcWZcYduATYVT7Mnx+8jUnycaYYL7OtCwCigug@mail.gmail.com>
* Fix fallback implementation of pg_atomic_write_u32().  (Andres Freund, 2016-10-07)
  I somehow had assumed that in the spinlock (in turn possibly using
  semaphores) based fallback atomics implementation 32 bit writes could be
  done without a lock. As far as the write goes that's correct, since
  postgres supports only platforms with single-copy atomicity for aligned
  32bit writes. But writing without holding the spinlock breaks
  read-modify-write operations like pg_atomic_compare_exchange_u32(), since
  they'll potentially "miss" a concurrent write, which can't happen in
  actual hardware implementations.

  In 9.6+ when using the fallback atomics implementation this could lead to
  buffer header locks not being properly marked as released, and potentially
  some related state corruption. I don't see a related danger in 9.5
  (earliest release with the API), because pg_atomic_write_u32() wasn't used
  in a concurrent manner there.

  The state variables of local buffers, before this change, were manipulated
  using pg_atomic_write_u32(), to avoid unnecessary synchronization
  overhead. As that'd not be the case anymore, introduce and use
  pg_atomic_unlocked_write_u32(), which does not correctly interact with RMW
  operations.

  This bug only caused issues when postgres is compiled on platforms without
  atomics support (i.e. no common new platform), or when compiled with
  --disable-atomics, which explains why this wasn't noticed in testing.

  Reported-By: Tom Lane
  Discussion: <14947.1475690465@sss.pgh.pa.us>
  Backpatch: 9.5-, where the atomic operations API was introduced.
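  The hazard is generic to any lock-emulated compare-and-swap; a
  self-contained pthread sketch of the race (an illustration of the problem,
  not the PostgreSQL fallback code itself):

      #include <pthread.h>
      #include <stdbool.h>
      #include <stdint.h>

      static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

      /* Emulated CAS: atomic only with respect to other *locked* operations. */
      static bool
      emulated_cas(uint32_t *ptr, uint32_t *expected, uint32_t newval)
      {
          bool ok;

          pthread_mutex_lock(&lock);
          ok = (*ptr == *expected);
          if (ok)
              *ptr = newval;
          else
              *expected = *ptr;
          pthread_mutex_unlock(&lock);
          return ok;
      }

      /*
       * BROKEN here: a plain store bypasses the lock, so it can land between
       * the compare and the store inside emulated_cas() and then be silently
       * overwritten, i.e. the "missed" concurrent write described above.
       */
      static void
      unlocked_write(uint32_t *ptr, uint32_t newval)
      {
          *ptr = newval;
      }

      /* Correct for this scheme: plain writes must take the same lock. */
      static void
      locked_write(uint32_t *ptr, uint32_t newval)
      {
          pthread_mutex_lock(&lock);
          *ptr = newval;
          pthread_mutex_unlock(&lock);
      }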
* Make TAP test suites work when @INC does not contain the current dir.  (Heikki Linnakangas, 2016-10-07)
  Recent Perl and/or new Linux distributions are starting to remove "." from
  the @INC list by default. That breaks the pg_rewind and ssl test suites,
  which use helper Perl modules that reside in the same directory. To fix,
  add the current source directory explicitly to prove's include dir.

  The vcregress.pl script probably also needs something like this, but I
  wasn't able to remove '.' from @INC on Windows to test this, and don't
  want to try doing that blindly.

  Discussion: <20160908204529.flg6nivjuwp5vaoy@alap3.anarazel.de>
* Fix pg_dump to work against pre-9.0 servers again.  (Tom Lane, 2016-10-07)
  getBlobs' queries for pre-9.0 servers were broken in two ways: the 7.x/8.x
  query uses DISTINCT so it can't have unspecified-type NULLs in the target
  list, and both that query and the 7.0 one failed to provide the correct
  output column labels, so that the subsequent code to extract data from the
  PGresult would fail.

  Back-patch to 9.6 where the breakage was introduced (by commit 23f34fa4b).

  Amit Langote and Tom Lane
  Discussion: <0a3e7a0e-37bd-8427-29bd-958135862f0a@lab.ntt.co.jp>
* Don't allow both --source-server and --source-pgdata args to pg_rewind.  (Heikki Linnakangas, 2016-10-07)
  They are supposed to be mutually exclusive, but there was no check for
  that.

  Michael Banck
  Discussion: <20161007103414.GD12247@nighthawk.caipicrew.dd-dns.de>
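  The shape of such a check is simple; a hedged sketch of the
  mutual-exclusion test (the option names mirror pg_rewind's, but this is
  not its actual option-parsing code):

      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      int
      main(int argc, char **argv)
      {
          const char *source_pgdata = NULL;
          const char *source_server = NULL;

          for (int i = 1; i < argc; i++)
          {
              if (strncmp(argv[i], "--source-pgdata=", 16) == 0)
                  source_pgdata = argv[i] + 16;
              else if (strncmp(argv[i], "--source-server=", 16) == 0)
                  source_server = argv[i] + 16;
          }

          /* the two ways of naming the source are mutually exclusive */
          if (source_pgdata != NULL && source_server != NULL)
          {
              fprintf(stderr, "only one of --source-pgdata and --source-server may be specified\n");
              exit(1);
          }
          return 0;
      }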
* Clear OpenSSL error queue after failed X509_STORE_load_locations() call.  (Heikki Linnakangas, 2016-10-07)
  Leaving the error in the error queue used to be harmless, because the
  X509_STORE_load_locations() call used to be the last step in
  initialize_SSL(), and we would clear the queue before the next
  SSL_connect() call. But the previous commit moved things around. The
  symptom was that if a CRL file was not found, and one of the subsequent
  initialization steps, like loading the client certificate or private key,
  failed, we would incorrectly print the "no such file" error message from
  the earlier X509_STORE_load_locations() call as the reason.

  Backpatch to all supported versions, like the previous patch.
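  A hedged sketch of the pattern, using standard OpenSSL calls with the
  surrounding libpq context omitted (crlfile is a caller-supplied path):

      #include <openssl/err.h>
      #include <openssl/ssl.h>

      /*
       * Try to load a CRL file into the context's certificate store.  A
       * missing file is non-fatal here, but the failed call leaves an entry
       * in OpenSSL's error queue; clear it so a later, unrelated failure
       * isn't reported with this stale "no such file" message.
       */
      static void
      load_crl_if_present(SSL_CTX *ctx, const char *crlfile)
      {
          X509_STORE *store = SSL_CTX_get_cert_store(ctx);

          if (X509_STORE_load_locations(store, crlfile, NULL) != 1)
              ERR_clear_error();
      }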
* Don't share SSL_CTX between libpq connections.  (Heikki Linnakangas, 2016-10-07)
  There were several issues with the old coding:

  1. There was a race condition, if two threads opened a connection at the
     same time. We used a mutex around SSL_CTX_* calls, but that was not
     enough, e.g. if one thread called SSL_CTX_load_verify_locations() with
     one path, and another thread set it with a different path, before the
     first thread got to establish the connection.

  2. Opening two different connections, with different sslrootcert settings,
     seemed to fail outright with "SSL error: block type is not 01". Not
     sure why.

  3. We created the SSL object before calling SSL_CTX_load_verify_locations
     and SSL_CTX_use_certificate_chain_file on the SSL context. That was
     wrong, because the options set on the SSL context are propagated to the
     SSL object when the SSL object is created. If they are set after the
     SSL object has already been created, they won't take effect until the
     next connection. (This is bug #14329)

  At least some of these could've been fixed while still using a shared
  context, but it would've been more complicated and error-prone. To keep
  things simple, let's just use a separate SSL context for each connection,
  and accept the overhead.

  Backpatch to all supported versions.

  Report, analysis and test case by Kacper Zuk.
  Discussion: <20160920101051.1355.79453@wrigleys.postgresql.org>
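  A hedged sketch of the per-connection approach: create a fresh SSL_CTX,
  configure it, and only then create the SSL object so it inherits those
  settings (file names are hypothetical and most return-value checks are
  omitted):

      #include <openssl/ssl.h>

      static SSL *
      open_connection_ssl(int sockfd)
      {
          SSL_CTX *ctx = SSL_CTX_new(SSLv23_method());
          SSL     *ssl;

          if (ctx == NULL)
              return NULL;

          /* configure the context before any SSL object is created from it */
          SSL_CTX_load_verify_locations(ctx, "root.crt", NULL);
          SSL_CTX_use_certificate_chain_file(ctx, "client.crt");
          SSL_CTX_use_PrivateKey_file(ctx, "client.key", SSL_FILETYPE_PEM);

          ssl = SSL_new(ctx);             /* inherits the settings made above */
          if (ssl != NULL)
              SSL_set_fd(ssl, sockfd);

          /*
           * The SSL object keeps its own reference to the context, so we can
           * drop ours; the context then lives and dies with this connection.
           */
          SSL_CTX_free(ctx);
          return ssl;
      }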
* Disable synchronous commits in pg_rewind.  (Heikki Linnakangas, 2016-10-06)
  If you point pg_rewind to a server that is using synchronous replication,
  with "pg_rewind --source-server=...", and the replication is not working
  for some reason, pg_rewind will get stuck because it creates a temporary
  table, which needs to be replicated. You could call broken replication a
  pilot error, but pg_rewind is often used in special circumstances, when
  there are changes to the replication setup.

  We don't do any "real" updates, and we don't care about fsyncing or
  replicating the operations on the temporary tables, so fix that by setting
  synchronous_commit off.

  Michael Banck, Michael Paquier. Backpatch to 9.5, where pg_rewind was
  introduced.
  Discussion: <20161005143938.GA12247@nighthawk.caipicrew.dd-dns.de>
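  A minimal libpq sketch of the idea (connection setup and broader error
  handling omitted):

      #include <libpq-fe.h>
      #include <stdio.h>

      /*
       * Turn off synchronous commit for this session, so that work done over
       * the connection cannot block waiting for a (possibly broken)
       * synchronous standby.
       */
      static int
      disable_sync_commit(PGconn *conn)
      {
          PGresult *res = PQexec(conn, "SET synchronous_commit = off");
          int       ok = (PQresultStatus(res) == PGRES_COMMAND_OK);

          if (!ok)
              fprintf(stderr, "SET failed: %s", PQerrorMessage(conn));
          PQclear(res);
          return ok;
      }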
* Update obsolete comments and perldoc.  (Robert Haas, 2016-10-05)
  Loose ends from commit 2a0f89cd717ce6d49cdc47850577823682167e87.

  Daniel Gustafsson
* Correct logical decoding restore behaviour for subtransactions.  (Andres Freund, 2016-10-03)
  Before initializing iteration over a subtransaction's changes, the last
  few changes were not spilled to disk. That's correct if the transaction
  didn't spill to disk, but otherwise... This bug can lead to missed or
  misordered subtransaction contents when they were spilled to disk.

  Move spilling of the remaining in-memory changes to
  ReorderBufferIterTXNInit(), where it can easily be applied to the top
  transaction and, if present, subtransactions.

  Since this code had too many bugs already, noticeably increase test
  coverage.

  Fixes: #14319
  Reported-By: Huan Ruan
  Discussion: <20160909012610.20024.58169@wrigleys.postgresql.org>
  Backport: 9.4-, where logical decoding was added
* Show a sensible value in pg_settings.unit for GUC_UNIT_XSEGS variables.  (Tom Lane, 2016-10-03)
  Commit 88e982302 invented GUC_UNIT_XSEGS for min_wal_size and
  max_wal_size, but neglected to make it display sensibly in
  pg_settings.unit (by adding a case to the switch in GetConfigOptionByNum).
  Fix that, and adjust said switch to throw a run-time error the next time
  somebody forgets.

  In passing, avoid using a static buffer for the output string --- the rest
  of this function pstrdup's from a local buffer, and I see no very good
  reason why the units code should do it differently and less safely.

  Per report from Otar Shavadze. Back-patch to 9.5 where the new unit type
  was added.
  Report: <CAG-jOyA=iNFhN+yB4vfvqh688B7Tr5SArbYcFUAjZi=0Exp-Lg@mail.gmail.com>
* Fix RLS with COPY (col1, col2) FROM tab  (Stephen Frost, 2016-10-03)
  Attempting to COPY a subset of columns from a table with RLS enabled would
  fail due to an invalid query being constructed (using a single ColumnRef
  with the list of fields to extract in 'fields', but that's for the
  different levels of an indirection for a single column, not for specifying
  multiple columns).

  Correct by building a ColumnRef and then ResTarget for each column being
  requested and then adding those to the targetList for the select query.
  Include regression tests to hopefully catch if this is broken again in the
  future.

  Patch-By: Adam Brightwell
  Reviewed-By: Michael Paquier
* Enforce a specific order for probing library loadability in pg_upgrade.  (Tom Lane, 2016-10-03)
  pg_upgrade checks whether all the shared libraries used in the old cluster
  are also available in the new one by issuing LOAD for each library name.
  Previously, it cared not what order it did the LOADs in. Ideally it should
  not have to care, but currently the transform modules in contrib fail
  unless both the language and datatype modules they depend on are loaded
  first. A backend-side solution for that looks possible but probably not
  back-patchable, so as a stopgap measure, let's do the LOAD tests in order
  by library name length. That should fix the problem for reasonably-named
  transform modules, eg "hstore_plpython" will be loaded after both "hstore"
  and "plpython". (Yeah, it's a hack.)

  In a larger sense, having a predictable order of these probes is a good
  thing, since it will make upgrades predictably work or not work in the
  face of inter-library dependencies. Also, this patch replaces O(N^2)
  de-duplication logic with O(N log N) logic, which could matter in
  installations with very many databases. So I don't foresee reverting this
  even after we have a proper fix for the library-dependency problem.

  In passing, improve a couple of SQL queries used here.

  Per complaint from Andrew Dunstan that pg_upgrade'ing the transform
  contrib modules failed. Back-patch to 9.5 where transform modules were
  introduced.
  Discussion: <f7ac29f3-515c-2a44-21c5-ec925053265f@dunslane.net>
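  The probe ordering comes down to a sort comparator over the library names;
  a hedged sketch of that comparator (not pg_upgrade's actual code):

      #include <stdlib.h>
      #include <string.h>

      /*
       * Sort library names by length, then alphabetically, so that e.g.
       * "hstore" and "plpython" are probed (LOADed) before "hstore_plpython".
       */
      static int
      library_name_compare(const void *p1, const void *p2)
      {
          const char *name1 = *(const char *const *) p1;
          const char *name2 = *(const char *const *) p2;
          size_t      len1 = strlen(name1);
          size_t      len2 = strlen(name2);

          if (len1 != len2)
              return (len1 < len2) ? -1 : 1;
          return strcmp(name1, name2);
      }

      /* usage: qsort(libnames, nlibs, sizeof(char *), library_name_compare); */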
* Add ALTER EXTENSION ADD/DROP ACCESS METHOD, and use it in pg_upgrade.  (Tom Lane, 2016-10-02)
  Without this, an extension containing an access method is not properly
  dumped/restored during pg_upgrade --- the AM ends up not being a member of
  the extension after upgrading.

  Another oversight in commit 473b93287, reported by Andrew Dunstan.
  Report: <f7ac29f3-515c-2a44-21c5-ec925053265f@dunslane.net>
* Do ClosePostmasterPorts() earlier in SubPostmasterMain().  (Tom Lane, 2016-10-01)
  In standard Unix builds, postmaster child processes do
  ClosePostmasterPorts immediately after InitPostmasterChild, that is almost
  immediately after being spawned. This is important because we don't want
  children holding open the postmaster's end of the postmaster death watch
  pipe.

  However, in EXEC_BACKEND builds, SubPostmasterMain was postponing this
  responsibility significantly, in order to make it slightly more convenient
  to pass the right flag value to ClosePostmasterPorts. This is bad,
  particularly seeing that process_shared_preload_libraries() might invoke
  nearly-arbitrary code. Rearrange so that we do it as soon as we've fetched
  the socket FDs via read_backend_variables().

  Also move the comment explaining about randomize_va_space to before the
  call of PGSharedMemoryReAttach, which is where it's relevant. The old
  placement was appropriate when the reattach happened inside
  CreateSharedMemoryAndSemaphores, but that was a long time ago.

  Back-patch to 9.3; the patch doesn't apply cleanly before that, and it
  doesn't seem worth a lot of effort given that we've had no actual field
  complaints traceable to this.

  Discussion: <4157.1475178360@sss.pgh.pa.us>
* Fix misstatement in comment in Makefile.shlib.  (Tom Lane, 2016-10-01)
  There is no need for "all: all-lib" to be placed before inclusion of
  Makefile.shlib. Makefile.global is what ensures that "all" is the default
  target, and we already document that that has to be included first.

  Per comment from Pavel Raiskup.
  Discussion: <1925924.izSMJEZO3x@unused-4-107.brq.redhat.com>
* Fix misplacement of submake-generated-headers prerequisites.  (Tom Lane, 2016-10-01)
  The sequence "configure; cd src/pl/plpython; make -j" failed due to trying
  to compile plpython's .o files before the generated headers finished
  building. (This is an important real-world case, since it's the typical
  second step when building both plpython2 and plpython3.) This happens
  because the submake-generated-headers target is not placed in a way to
  make it a prerequisite to compiling the .o files. Fix that.

  Checking other uses of submake-generated-headers, I noted that the one
  attached to pg_regress was similarly misplaced; but it's actually not
  needed at all for pg_regress.o, rather regress.o, so move it to be a
  prerequisite of that.

  Back-patch to 9.6 where submake-generated-headers was introduced (by
  commit 548af97fc). It's not immediately clear to me why the previous
  coding didn't have the same issue; but since we've not had field reports
  of plpython make failing, leave it alone in the older branches.

  Pavel Raiskup and Tom Lane
  Discussion: <1925924.izSMJEZO3x@unused-4-107.brq.redhat.com>
* Improve error reporting in pg_upgrade's file copying/linking/rewriting.  (Tom Lane, 2016-09-30)
  The previous design for this had copyFile(), linkFile(), and
  rewriteVisibilityMap() returning strerror strings, with the caller
  producing one-size-fits-all error messages based on that. This made it
  impossible to produce messages that described the failures with any degree
  of precision, especially not short-read problems since those don't set
  errno at all.

  Since pg_upgrade has no intention of continuing after any error in this
  area, let's fix this by just letting these functions call pg_fatal() for
  themselves, making it easy for each point of failure to have a suitable
  error message. Taking this approach also allows dropping cleanup code that
  was unnecessary and was often rather sloppy about preserving errno. To not
  lose relevant info that was reported before, pass in the schema name and
  table name of the current table so that they can be included in the error
  reports.

  An additional problem was the use of getErrorText(), which was flat out
  wrong for all but a couple of call sites, because it unconditionally did
  "_dosmaperr(GetLastError())" on Windows. That's only appropriate when
  reporting an error from a Windows-native API, which only a couple of the
  callers were actually doing. Thus, even the reported strerror string would
  be unrelated to the actual failure in many cases on Windows. To fix, get
  rid of getErrorText() altogether, and just have call sites do
  strerror(errno) instead, since that's the way all the rest of our frontend
  programs do it. Add back the _dosmaperr() calls in the two places where
  that's actually appropriate.

  In passing, make assorted messages hew more closely to project style
  guidelines, notably by removing initial capitals in not-complete-sentence
  primary error messages. (I didn't make any effort to clean up places I
  didn't have another reason to touch, though.)

  Per discussion of a report from Thomas Kellerer. Back-patch to 9.6, but no
  further; given the relative infrequency of reports of problems here, it's
  not clear it's worth adapting the patch to older branches.

  Patch by me, but with credit to Alvaro Herrera for spotting the issue with
  getErrorText's misuse of _dosmaperr().
  Discussion: <nsjrbh$8li$1@blaine.gmane.org>
* Fix multiple portability issues in pg_upgrade's rewriteVisibilityMap().  (Tom Lane, 2016-09-30)
  This is new code in 9.6, and evidently we missed out testing it as
  thoroughly as it should have been. Bugs fixed here:

  1. Use binary not text mode to open the files on Windows. Before, if the
     visibility map chanced to contain two bytes that looked like \r\n,
     Windows' read() would convert that to \n, which both corrupts the map
     data and causes the file to look shorter than it should. Unless you
     were *very* unlucky and had an exact multiple of 8K such occurrences in
     each VM file, this would cause pg_upgrade to report a failure, though
     with a rather obscure error message.

  2. The code for copying rebuilt bytes into the output was simply wrong. It
     chanced to work okay on little-endian machines but would emit the bytes
     in the wrong order on big-endian, leading to silent corruption of the
     visibility map data.

  3. The code was careless about alignment of the working buffers. Given all
     three of an alignment-picky architecture, a compiler that chooses to
     put the new_vmbuf[] local variable at an odd starting address, and a
     checksum-enabled database, pg_upgrade would dump core.

  Point one was reported by Thomas Kellerer, the other two detected by
  code-reading. Point two is much the nastiest of these issues from an
  impact standpoint, though fortunately it affects only a minority of users.
  The Windows issue will definitely bite people, but it seems quite unlikely
  that there would be undetected corruption from that.

  In addition, I failed to resist the temptation to do some minor cosmetic
  adjustments, mostly improving the comments. It would be a good idea to try
  to improve the error reporting here, but that seems like material for a
  separate patch.

  Discussion: <nsjrbh$8li$1@blaine.gmane.org>
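  Points one and three come down to two small C idioms; a generic sketch of
  both (not the pg_upgrade code; the 8192-byte block size is illustrative):

      #include <fcntl.h>

      #ifndef O_BINARY
      #define O_BINARY 0              /* no-op outside Windows */
      #endif

      /* a union forces alignment suitable for any of its members */
      typedef union
      {
          char        data[8192];
          double      force_align_d;
          long long   force_align_i64;
      } aligned_block;

      static int
      open_block_file(const char *path)
      {
          /*
           * O_BINARY keeps Windows from translating \r\n to \n, which would
           * both corrupt the bytes read and make the file appear short.
           */
          return open(path, O_RDONLY | O_BINARY, 0);
      }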
* Retry opening new segments in pg_xlogdump --follow  (Magnus Hagander, 2016-09-30)
  There is a small window between when the server closes out the existing
  segment and the new one is created. Put a loop around the open call in
  this case to make sure we wait for the new file to actually appear.
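  A hedged sketch of such a retry loop (the retry count and sleep interval
  are made up):

      #include <errno.h>
      #include <fcntl.h>
      #include <unistd.h>

      /*
       * Open a file that may not exist quite yet, e.g. a WAL segment the
       * server has not created; retry briefly on ENOENT before giving up.
       */
      static int
      open_with_retry(const char *path)
      {
          for (int tries = 0; tries < 50; tries++)
          {
              int fd = open(path, O_RDONLY, 0);

              if (fd >= 0)
                  return fd;
              if (errno != ENOENT)
                  break;              /* a real error: don't keep retrying */
              usleep(100 * 1000);     /* wait 100 ms and try again */
          }
          return -1;
      }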
* Silence compiler warnings  (Alvaro Herrera, 2016-09-28)
  Reported by Peter Eisentraut. Coding suggested by Tom Lane.
* worker_spi: Call pgstat_report_stat.  (Robert Haas, 2016-09-28)
  Without this, statistics changes accumulated by the worker never get
  reported to the stats collector, which is bad.

  Julien Rouhaud
* Fix dangling pointer problem in ReorderBufferSerializeChange.  (Robert Haas, 2016-09-28)
  Commit 3fe3511d05127cc024b221040db2eeb352e7d716 introduced a new case into
  this function, but neglected to ensure that the "ondisk" pointer got
  updated after a possible reallocation as the code does in other cases.

  Stas Kelvich, per diagnosis by Konstantin Knizhnik.
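  The underlying trap is the generic one of keeping a pointer into a buffer
  across a reallocation; a plain-C illustration (not the ReorderBuffer code,
  and error checking is omitted):

      #include <stdlib.h>
      #include <string.h>

      typedef struct
      {
          char   *buf;        /* may move when grown */
          size_t  size;
      } work_buffer;

      static void
      append_bytes(work_buffer *wb, size_t used, const char *src, size_t n)
      {
          char *ondisk = wb->buf;         /* convenience pointer into buf */

          if (used + n > wb->size)
          {
              wb->size = (used + n) * 2;
              wb->buf = realloc(wb->buf, wb->size);

              /*
               * The buffer may have moved; any pointer derived from the old
               * address is now dangling.  Refreshing it here is the step the
               * new code path had been missing for its "ondisk" pointer.
               */
              ondisk = wb->buf;
          }
          memcpy(ondisk + used, src, n);
      }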
* Include <sys/select.h> where needed  (Alvaro Herrera, 2016-09-27)
  <sys/select.h> is required by POSIX.1-2001 to get the prototype of
  select(2), but nearly no systems enforce that because older standards let
  you get away with including some other headers. Recent OpenBSD hacking has
  removed that frail touch of friendliness, however, which broke some
  compiles; fix all the way back to 9.1 by adding the required standard
  header.

  Only vacuumdb.c was reported to fail, but it seems easier to fix the whole
  lot in a fell swoop.

  Per bug #14334 by Sean Farrell.
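  A minimal example of the portable usage (nothing PostgreSQL-specific):

      #include <sys/select.h>         /* POSIX.1-2001 home of select() and fd_set */
      #include <stdio.h>
      #include <unistd.h>

      int
      main(void)
      {
          fd_set          rfds;
          struct timeval  tv = {1, 0};    /* one-second timeout */

          FD_ZERO(&rfds);
          FD_SET(STDIN_FILENO, &rfds);

          if (select(STDIN_FILENO + 1, &rfds, NULL, NULL, &tv) > 0)
              printf("stdin is readable\n");
          return 0;
      }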
* Stamp 9.6.0.  [tag: REL9_6_0]  (Tom Lane, 2016-09-26)
* Translation updates  (Peter Eisentraut, 2016-09-26)
  Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: 5c283d709ce8368fe710f90429b72048ac4c6349
* Install TAP test infrastructure so it's available for extension testing.  (Tom Lane, 2016-09-23)
  When configured with --enable-tap-tests, "make install" will now install
  the Perl support files for TAP testing where PGXS will find them. This
  allows extensions to rely on $(prove_check) even when being built
  out-of-tree.

  Back-patch to 9.4 where we first started to support TAP testing, to reduce
  the number of cases extension makefiles need to consider.

  Craig Ringer
  Discussion: <CAMsr+YFXv+2qne6xJW7z_25mYBtktRX5rpkrgrb+DRgQ_FxgHQ@mail.gmail.com>
* Fix incorrect logic for excluding range constructor functions in pg_dump.  (Tom Lane, 2016-09-23)
  Faulty AND/OR nesting in the WHERE clause of getFuncs' SQL query led to
  dumping range constructor functions if they are part of an extension and
  we're in binary-upgrade mode. Actually, we don't want to dump them
  separately even then, since CREATE TYPE AS RANGE will create the range's
  constructor functions regardless.

  Per report from Andrew Dunstan. It looks like this mistake was introduced
  by me, in commit b985d4877, in perhaps-overzealous refactoring to reduce
  code duplication. I'm suitably embarrassed.

  Report: <34854939-02d7-f591-5677-ce2994104599@dunslane.net>