path: root/src
...

* Fix cidin() to handle values above 2^31 platform-independently.  (Tom Lane, 2016-10-18)

  CommandId is declared as uint32, and values up to 4G are indeed legal. cidout() handles them properly by treating the value as unsigned int. But cidin() was just using atoi(), which has platform-dependent behavior for values outside the range of signed int, as reported by Bart Lengkeek in bug #14379. Use strtoul() instead, as xidin() does.

  In passing, make some purely cosmetic changes to make xidin/xidout look more like cidin/cidout; the former didn't have a monopoly on best practice IMO.

  Neither xidin nor cidin makes any attempt to throw error for invalid input. I didn't change that here, and am not sure it's worth worrying about since neither is really a user-facing type. The point is just to ensure that indubitably-valid inputs work as expected.

  It's been like this for a long time, so back-patch to all supported branches.

  Report: <20161018152550.1413.6439@wrigleys.postgresql.org>
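
  A minimal standalone sketch (not the cidin()/cidout() source) of why this matters for a CommandId above 2^31:

      /* Parsing "3000000000" (a legal CommandId) with atoi() vs strtoul(). */
      #include <stdio.h>
      #include <stdlib.h>

      int
      main(void)
      {
          const char *s = "3000000000";   /* > 2^31, still a valid uint32 */

          int bad = atoi(s);              /* overflows signed int; result is platform-dependent */
          unsigned int good = (unsigned int) strtoul(s, NULL, 10);   /* matches the uint32 type */

          printf("atoi: %d, strtoul: %u\n", bad, good);
          return 0;
      }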

* Fix assorted integer-overflow hazards in varbit.c.  (Tom Lane, 2016-10-14)

  bitshiftright() and bitshiftleft() would recursively call each other infinitely if the user passed INT_MIN for the shift amount, due to integer overflow in negating the shift amount. To fix, clamp to -VARBITMAXLEN. That doesn't change the results since any shift distance larger than the input bit string's length produces an all-zeroes result.

  Also fix some places that seemed inadequately paranoid about input typmods exceeding VARBITMAXLEN. While a typmod accepted by anybit_typmodin() will certainly be much less than that, at least some of these spots are reachable with user-chosen integer values.

  Andreas Seltenreich and Tom Lane

  Discussion: <87d1j2zqtz.fsf@credativ.de>
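
  A simplified sketch of the hazard and the clamp (the VARBITMAXLEN value here is a stand-in, not the real definition):

      #include <limits.h>

      #define VARBITMAXLEN  0x7fffffe     /* illustrative stand-in */

      /* A negative shift distance is handled by negating it and calling the
       * opposite shift function.  With INT_MIN the negation overflows back to
       * a negative value, so the two functions call each other forever.
       * Clamping first makes the negation safe and cannot change the
       * (all-zeroes) result. */
      static int
      clamp_shift(int shft)
      {
          if (shft < -VARBITMAXLEN)
              shft = -VARBITMAXLEN;
          return shft;
      }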

* Fix handling of pgstat counters for TRUNCATE in a prepared transaction.  (Tom Lane, 2016-10-13)

  pgstat_twophase_postcommit is supposed to duplicate the math in AtEOXact_PgStat, but it had missed out the bit about clearing t_delta_live_tuples/t_delta_dead_tuples for a TRUNCATE.

  It's harder than you might think to replicate the issue here, because those counters would only be nonzero when a previous transaction in the same backend had added/deleted tuples in the truncated table, and those counts hadn't been sent to the stats collector yet.

  Evident oversight in commit d42358efb. I've not added a regression test for this; we tried to add one in d42358efb, and had to revert it because it was too timing-sensitive for the buildfarm.

  Back-patch to 9.5 where d42358efb came in.

  Stas Kelvich

  Discussion: <EB57BF68-C06D-4737-BDDC-4BA778F4E62B@postgrespro.ru>

* Fix another bug in merging of inherited CHECK constraints.  (Tom Lane, 2016-10-13)

  It's not good for an inherited child constraint to be marked connoinherit; that would result in the constraint not propagating to grandchild tables, if any are created later. The code mostly prevented this from happening but there was one case that was missed.

  This is somewhat related to commit e55a946a8, which also tightened checks on constraint merging. Hence, back-patch to 9.2 like that one. This isn't so much because there's a concrete feature-related reason to stop there, as to avoid having more distinct behaviors than we have to in this area.

  Amit Langote

  Discussion: <b28ee774-7009-313d-dd55-5bdd81242c41@lab.ntt.co.jp>

* Try to find out the actual hugepage size when making a MAP_HUGETLB request.  (Tom Lane, 2016-10-13)

  Even if Linux's mmap() is okay with a partial-hugepage request, munmap() is not, as reported by Chris Richards. Therefore it behooves us to try a bit harder to find out the actual hugepage size, instead of assuming that we can skate by with a guess.

  For the moment, just look into /proc/meminfo to find out the default hugepage size, and use that. Later, on kernels that support requests for nondefault sizes, we might try to consider other alternatives. But that smells more like a new feature than a bug fix, especially if we want to provide any way for the DBA to control it, so leave it for another day.

  I set this up to allow easy addition of platform-specific code for non-Linux platforms, if needed; but right now there are no reports suggesting that we need to work harder on other platforms.

  Back-patch to 9.4 where hugepage support was introduced.

  Discussion: <31056.1476303954@sss.pgh.pa.us>
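
  A rough standalone sketch of the /proc/meminfo approach (assumes the usual "Hugepagesize: N kB" line format; not the committed code):

      #include <stdio.h>

      /* Return the default hugepage size in kB, or 0 if it cannot be determined. */
      static long
      default_hugepage_size_kb(void)
      {
          FILE *f = fopen("/proc/meminfo", "r");
          char  line[128];
          long  kb = 0;

          if (f == NULL)
              return 0;               /* caller falls back to a guess */
          while (fgets(line, sizeof(line), f))
          {
              if (sscanf(line, "Hugepagesize: %ld kB", &kb) == 1)
                  break;
          }
          fclose(f);
          return kb;
      }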

* Clean up handling of anonymous mmap'd shared-memory segment.  (Tom Lane, 2016-10-13)

  Fix detaching of the mmap'd segment to have its own on_shmem_exit callback, rather than piggybacking on the one for detaching from the SysV segment. That was confusing, and given the distance between the two attach calls, it was trouble waiting to happen.

  Make the detaching calls idempotent by clearing AnonymousShmem to show we've already unmapped. I spent quite a bit of time yesterday trying to find a path that would allow the munmap()'s to be done twice, and while I did not succeed, it seems silly that there's even a question.

  Make the #ifdef logic less confusing by separating "do we want to use anonymous shmem" from EXEC_BACKEND. Even though there's no current scenario where those conditions are different, it is not helpful for different places in the same file to be testing EXEC_BACKEND for what are fundamentally different reasons.

  Don't do on_exit_reset() in StartBackgroundWorker(). At best that's useless (InitPostmasterChild would have done it already) and at worst it could zap some callback that's unrelated to shared memory.

  Improve comments, and simplify the huge_pages enablement logic slightly.

  Back-patch to 9.4 where hugepage support was introduced. Arguably this should go into 9.3 as well, but the code looks significantly different there, and I doubt it's worth the trouble of adapting the patch given I can't show a live bug.
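
  A simplified sketch of the idempotent-detach idea (callback signature and details simplified relative to the real code):

      #include <stddef.h>
      #include <sys/mman.h>

      static void  *AnonymousShmem = NULL;
      static size_t AnonymousShmemSize = 0;

      /* Unmap the anonymous segment; clearing the pointer makes a second
       * invocation of this callback a harmless no-op. */
      static void
      AnonymousShmemDetach(void)
      {
          if (AnonymousShmem != NULL)
          {
              munmap(AnonymousShmem, AnonymousShmemSize);
              AnonymousShmem = NULL;
          }
      }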

* Revert addition of PGDLLEXPORT in PG_FUNCTION_INFO_V1 macro.  (Tom Lane, 2016-10-12)

  This turns out not to be as harmless as I thought: MSVC will complain if it sees an "extern" declaration without PGDLLEXPORT and then one with. (Seems fairly silly, given that this can be changed after the fact by the linker, but there you have it.) Therefore, contrib modules that have extern's for V1 functions in header files are falling over in the buildfarm, since none of those externs are marked PGDLLEXPORT.

  We might or might not conclude that we're willing to plaster those declarations with PGDLLEXPORT in HEAD, but in any case there's no way we're going to ship this change in the back branches. Third-party authors would not thank us for breaking their code in a minor release. Hence, revert the addition of PGDLLEXPORT (but let's keep the extra info in the comment). If we do the other changes we can revert this commit in HEAD.

  Per buildfarm.

* Provide DLLEXPORT markers for C functions via PG_FUNCTION_INFO_V1 macro.  (Tom Lane, 2016-10-12)

  This isn't really necessary for our own code, because we use a .DEF file in MSVC builds (see gendef.pl), or --export-all-symbols in MinGW and Cygwin builds, to ensure that all global symbols in loadable modules will be exported on Windows. However, third-party authors might use different build processes that need this marker, and it's harmless enough for our own builds.

  To some extent, this is an oversight in commit e7128e8db, so back-patch to 9.4 where that was added.

  Laurenz Albe

  Discussion: <A737B7A37273E048B164557ADEF4A58B539300BD@ntex2010a.host.magwien.gv.at>
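
  For context, the standard shape of a V1 function whose extern declarations PG_FUNCTION_INFO_V1 generates (ordinary documented usage, not code from this commit):

      #include "postgres.h"
      #include "fmgr.h"

      PG_MODULE_MAGIC;

      PG_FUNCTION_INFO_V1(add_one);   /* emits the extern declarations under discussion */

      Datum
      add_one(PG_FUNCTION_ARGS)
      {
          int32 arg = PG_GETARG_INT32(0);

          PG_RETURN_INT32(arg + 1);
      }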

* Fix copy-pasto in comment.  (Heikki Linnakangas, 2016-10-12)

  Amit Langote

* In PQsendQueryStart(), avoid leaking any left-over async result.  (Tom Lane, 2016-10-10)

  Ordinarily there would not be an async result sitting around at this point, but it appears that in corner cases there can be. Considering all the work we're about to launch, it's hardly going to cost anything noticeable to check.

  It's been like this forever, so back-patch to all supported branches.

  Report: <CAD-Qf1eLUtBOTPXyFQGW-4eEsop31tVVdZPu4kL9pbQ6tJPO8g@mail.gmail.com>

* Fix two bugs in merging of inherited CHECK constraints.  (Tom Lane, 2016-10-08)

  Historically, we've allowed users to add a CHECK constraint to a child table and then add an identical CHECK constraint to the parent. This results in "merging" the two constraints so that the pre-existing child constraint ends up with both conislocal = true and coninhcount > 0. However, if you tried to do it in the other order, you got a duplicate constraint error. This is problematic for pg_dump, which needs to issue separated ADD CONSTRAINT commands in some cases, but has no good way to ensure that the constraints will be added in the required order. And it's more than a bit arbitrary, too. The goal of complaining about duplicated ADD CONSTRAINT commands can be served if we reject the case of adding a constraint when the existing one already has conislocal = true; but if it has conislocal = false, let's just make the ADD CONSTRAINT set conislocal = true. In this way, either order of adding the constraints has the same end result.

  Another problem was that the code allowed creation of a parent constraint marked convalidated that is merged with a child constraint that is !convalidated. In this case, an inheritance scan of the parent table could emit some rows violating the constraint condition, which would be an unexpected result given the marking of the parent constraint as validated. Hence, forbid merging of constraints in this case. (Note: valid child and not-valid parent seems fine, so continue to allow that.)

  Per report from Benedikt Grundmann. Back-patch to 9.2 where we introduced possibly-not-valid check constraints. The second bug obviously doesn't apply before that, and I think the first doesn't either, because pg_dump only gets into this situation when dealing with not-valid constraints.

  Report: <CADbMkNPT-Jz5PRSQ4RbUASYAjocV_KHUWapR%2Bg8fNvhUAyRpxA%40mail.gmail.com>
  Discussion: <22108.1475874586@sss.pgh.pa.us>

* Remove user_relns() SRF from regression tests.  (Tom Lane, 2016-10-08)

  Back-patch commit 0dba54f1666ead71c54ce100b39efda67596d297 into the older branches. This test is almost as much of a patching hazard there as it is in HEAD, and it has no more reason to be needed than it does in HEAD.

  I went back as far as 9.2; I judged 9.1 not worth the trouble since it's on the verge of being EOL'd.

* libpqwalreceiver needs to link with libintl when using --enable-nls.  (Tom Lane, 2016-10-07)

  The need for this was previously obscured even on picky platforms by the hack we used to support direct cross-module references in the transforms contrib modules. Now that that hack is gone, the undefined symbol is exposed, as reported by Robert Haas.

  Back-patch to 9.5 where we started to use -Wl,-undefined,dynamic_lookup. I'm a bit surprised that the older branches don't seem to contain any gettext references in this module, but since they don't fail at build time, they must not. (We might be able to get away with leaving this alone in 9.5/9.6, but I think it's cleaner if the reference gets resolved at link time.)

  Report: <CA+TgmoaHJKU5kcWZcYduATYVT7Mnx+8jUnycaYYL7OtCwCigug@mail.gmail.com>

* Fix fallback implementation of pg_atomic_write_u32().  (Andres Freund, 2016-10-07)

  I had somehow assumed that, in the spinlock-based (in turn possibly semaphore-based) fallback atomics implementation, 32-bit writes could be done without a lock. As far as the write itself goes that's correct, since postgres supports only platforms with single-copy atomicity for aligned 32-bit writes. But writing without holding the spinlock breaks read-modify-write operations like pg_atomic_compare_exchange_u32(), since they'll potentially "miss" a concurrent write, which can't happen in actual hardware implementations.

  In 9.6+ when using the fallback atomics implementation this could lead to buffer header locks not being properly marked as released, and potentially some related state corruption. I don't see a related danger in 9.5 (earliest release with the API), because pg_atomic_write_u32() wasn't used in a concurrent manner there.

  The state variables of local buffers were, before this change, manipulated using pg_atomic_write_u32(), to avoid unnecessary synchronization overhead. As that'd not be the case anymore, introduce and use pg_atomic_unlocked_write_u32(), which does not correctly interact with RMW operations.

  This bug only caused issues when postgres is compiled on platforms without atomics support (i.e. no common new platform), or when compiled with --disable-atomics, which explains why this wasn't noticed in testing.

  Reported-By: Tom Lane
  Discussion: <14947.1475690465@sss.pgh.pa.us>
  Backpatch: 9.5-, where the atomic operations API was introduced.
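
  A toy model (pthread mutex standing in for the spinlock; not the real atomics fallback code) of why a plain unlocked store breaks the locked compare-exchange:

      #include <pthread.h>
      #include <stdbool.h>
      #include <stdint.h>

      typedef struct
      {
          pthread_mutex_t lock;
          uint32_t        value;
      } fallback_atomic_u32;

      /* Locked compare-exchange: read, compare, and conditionally write,
       * all under the same lock. */
      static bool
      fallback_compare_exchange(fallback_atomic_u32 *a, uint32_t *expected, uint32_t newval)
      {
          bool ok;

          pthread_mutex_lock(&a->lock);
          ok = (a->value == *expected);
          if (ok)
              a->value = newval;
          else
              *expected = a->value;
          pthread_mutex_unlock(&a->lock);
          return ok;
      }

      /* A write that skipped the lock could slip in between the read and the
       * write above and be silently overwritten; hence the fallback
       * pg_atomic_write_u32() must take the lock too. */
      static void
      fallback_write(fallback_atomic_u32 *a, uint32_t v)
      {
          pthread_mutex_lock(&a->lock);
          a->value = v;
          pthread_mutex_unlock(&a->lock);
      }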

* Make TAP test suites work when @INC does not contain the current dir.  (Heikki Linnakangas, 2016-10-07)

  Recent Perl and/or new Linux distributions are starting to remove "." from the @INC list by default. That breaks the pg_rewind and ssl test suites, which use helper perl modules that reside in the same directory. To fix, add the current source directory explicitly to prove's include dir.

  The vcregress.pl script probably also needs something like this, but I wasn't able to remove '.' from @INC on Windows to test this, and don't want to try doing that blindly.

  Discussion: <20160908204529.flg6nivjuwp5vaoy@alap3.anarazel.de>

* Don't allow both --source-server and --source-pgdata args to pg_rewind.  (Heikki Linnakangas, 2016-10-07)

  They are supposed to be mutually exclusive, but there was no check for that.

  Michael Banck

  Discussion: <20161007103414.GD12247@nighthawk.caipicrew.dd-dns.de>

* Clear OpenSSL error queue after failed X509_STORE_load_locations() call.  (Heikki Linnakangas, 2016-10-07)

  Leaving the error in the error queue used to be harmless, because the X509_STORE_load_locations() call used to be the last step in initialize_SSL(), and we would clear the queue before the next SSL_connect() call. But the previous commit moved things around. The symptom was that if a CRL file was not found, and one of the subsequent initialization steps, like loading the client certificate or private key, failed, we would incorrectly print the "no such file" error message from the earlier X509_STORE_load_locations() call as the reason.

  Backpatch to all supported versions, like the previous patch.
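
  A simplified sketch of the idea (not libpq's actual code): treat a missing CRL file as non-fatal and wipe the queued error so a later failure reports its own cause:

      #include <openssl/err.h>
      #include <openssl/ssl.h>

      static void
      load_crl_if_present(SSL_CTX *ctx, const char *crlfile)
      {
          X509_STORE *store = SSL_CTX_get_cert_store(ctx);

          if (X509_STORE_load_locations(store, crlfile, NULL) != 1)
              ERR_clear_error();      /* drop the stale "no such file" error */
      }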

* Don't share SSL_CTX between libpq connections.  (Heikki Linnakangas, 2016-10-07)

  There were several issues with the old coding:

  1. There was a race condition, if two threads opened a connection at the same time. We used a mutex around SSL_CTX_* calls, but that was not enough, e.g. if one thread called SSL_CTX_load_verify_locations() with one path, and another thread set it with a different path, before the first thread got to establish the connection.

  2. Opening two different connections, with different sslrootcert settings, seemed to fail outright with "SSL error: block type is not 01". Not sure why.

  3. We created the SSL object before calling SSL_CTX_load_verify_locations and SSL_CTX_use_certificate_chain_file on the SSL context. That was wrong, because the options set on the SSL context are propagated to the SSL object when the SSL object is created. If they are set after the SSL object has already been created, they won't take effect until the next connection. (This is bug #14329)

  At least some of these could've been fixed while still using a shared context, but it would've been more complicated and error-prone. To keep things simple, let's just use a separate SSL context for each connection, and accept the overhead.

  Backpatch to all supported versions.

  Report, analysis and test case by Kacper Zuk.

  Discussion: <20160920101051.1355.79453@wrigleys.postgresql.org>

* Disable synchronous commits in pg_rewind.  (Heikki Linnakangas, 2016-10-06)

  If you point pg_rewind to a server that is using synchronous replication, with "pg_rewind --source-server=...", and the replication is not working for some reason, pg_rewind will get stuck because it creates a temporary table, which needs to be replicated. You could call broken replication a pilot error, but pg_rewind is often used in special circumstances, when there are changes to the replication setup.

  We don't do any "real" updates, and we don't care about fsyncing or replicating the operations on the temporary tables, so fix that by setting synchronous_commit off.

  Michael Banck, Michael Paquier. Backpatch to 9.5, where pg_rewind was introduced.

  Discussion: <20161005143938.GA12247@nighthawk.caipicrew.dd-dns.de>
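
  A hedged sketch of the effect of the fix, expressed as a standalone libpq call (the helper name is illustrative, not pg_rewind's actual code):

      #include <stdio.h>
      #include <libpq-fe.h>

      /* Turn off synchronous_commit on the source connection so the temporary-
       * table traffic never waits for a (possibly broken) synchronous standby. */
      static void
      disable_sync_commit(PGconn *conn)
      {
          PGresult *res = PQexec(conn, "SET synchronous_commit = off");

          if (PQresultStatus(res) != PGRES_COMMAND_OK)
              fprintf(stderr, "could not set synchronous_commit: %s",
                      PQerrorMessage(conn));
          PQclear(res);
      }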

* Correct logical decoding restore behaviour for subtransactions.  (Andres Freund, 2016-10-03)

  Before initializing iteration over a subtransaction's changes, the last few changes were not spilled to disk. That's correct if the transaction didn't spill to disk, but otherwise... This bug can lead to missed or misordered subtransaction contents when they were spilled to disk.

  Move spilling of the remaining in-memory changes to ReorderBufferIterTXNInit(), where it can easily be applied to the top transaction and, if present, subtransactions.

  Since this code had too many bugs already, noticeably increase test coverage.

  Fixes: #14319
  Reported-By: Huan Ruan
  Discussion: <20160909012610.20024.58169@wrigleys.postgresql.org>
  Backport: 9.4-, where logical decoding was added

* Show a sensible value in pg_settings.unit for GUC_UNIT_XSEGS variables.  (Tom Lane, 2016-10-03)

  Commit 88e982302 invented GUC_UNIT_XSEGS for min_wal_size and max_wal_size, but neglected to make it display sensibly in pg_settings.unit (by adding a case to the switch in GetConfigOptionByNum). Fix that, and adjust said switch to throw a run-time error the next time somebody forgets.

  In passing, avoid using a static buffer for the output string --- the rest of this function pstrdup's from a local buffer, and I see no very good reason why the units code should do it differently and less safely.

  Per report from Otar Shavadze. Back-patch to 9.5 where the new unit type was added.

  Report: <CAG-jOyA=iNFhN+yB4vfvqh688B7Tr5SArbYcFUAjZi=0Exp-Lg@mail.gmail.com>

* Fix RLS with COPY (col1, col2) FROM tab  (Stephen Frost, 2016-10-03)

  Attempting to COPY a subset of columns from a table with RLS enabled would fail due to an invalid query being constructed (using a single ColumnRef with the list of fields to extract in 'fields', but that's for the different levels of an indirection for a single column, not for specifying multiple columns).

  Correct by building a ColumnRef and then a ResTarget for each column being requested and then adding those to the targetList for the select query. Include regression tests to hopefully catch if this is broken again in the future.

  Patch-By: Adam Brightwell
  Reviewed-By: Michael Paquier

* Enforce a specific order for probing library loadability in pg_upgrade.  (Tom Lane, 2016-10-03)

  pg_upgrade checks whether all the shared libraries used in the old cluster are also available in the new one by issuing LOAD for each library name. Previously, it cared not what order it did the LOADs in. Ideally it should not have to care, but currently the transform modules in contrib fail unless both the language and datatype modules they depend on are loaded first. A backend-side solution for that looks possible but probably not back-patchable, so as a stopgap measure, let's do the LOAD tests in order by library name length. That should fix the problem for reasonably-named transform modules, eg "hstore_plpython" will be loaded after both "hstore" and "plpython". (Yeah, it's a hack.)

  In a larger sense, having a predictable order of these probes is a good thing, since it will make upgrades predictably work or not work in the face of inter-library dependencies. Also, this patch replaces O(N^2) de-duplication logic with O(N log N) logic, which could matter in installations with very many databases. So I don't foresee reverting this even after we have a proper fix for the library-dependency problem.

  In passing, improve a couple of SQL queries used here.

  Per complaint from Andrew Dunstan that pg_upgrade'ing the transform contrib modules failed. Back-patch to 9.5 where transform modules were introduced.

  Discussion: <f7ac29f3-515c-2a44-21c5-ec925053265f@dunslane.net>
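
  A sketch of the ordering rule as a qsort comparator (illustrative; assumes the library names are collected in an array of strings):

      #include <stdlib.h>
      #include <string.h>

      /* Shorter library names sort first, so "hstore" and "plpython" are
       * probed before "hstore_plpython"; strcmp breaks ties for a stable,
       * predictable order. */
      static int
      library_name_compare(const void *p1, const void *p2)
      {
          const char *n1 = *(const char *const *) p1;
          const char *n2 = *(const char *const *) p2;
          size_t      l1 = strlen(n1);
          size_t      l2 = strlen(n2);

          if (l1 != l2)
              return (l1 < l2) ? -1 : 1;
          return strcmp(n1, n2);
      }

      /* usage: qsort(libnames, nlibs, sizeof(char *), library_name_compare); */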

* Do ClosePostmasterPorts() earlier in SubPostmasterMain().  (Tom Lane, 2016-10-01)

  In standard Unix builds, postmaster child processes do ClosePostmasterPorts immediately after InitPostmasterChild, that is almost immediately after being spawned. This is important because we don't want children holding open the postmaster's end of the postmaster death watch pipe.

  However, in EXEC_BACKEND builds, SubPostmasterMain was postponing this responsibility significantly, in order to make it slightly more convenient to pass the right flag value to ClosePostmasterPorts. This is bad, particularly seeing that process_shared_preload_libraries() might invoke nearly-arbitrary code. Rearrange so that we do it as soon as we've fetched the socket FDs via read_backend_variables().

  Also move the comment explaining about randomize_va_space to before the call of PGSharedMemoryReAttach, which is where it's relevant. The old placement was appropriate when the reattach happened inside CreateSharedMemoryAndSemaphores, but that was a long time ago.

  Back-patch to 9.3; the patch doesn't apply cleanly before that, and it doesn't seem worth a lot of effort given that we've had no actual field complaints traceable to this.

  Discussion: <4157.1475178360@sss.pgh.pa.us>

* Retry opening new segments in pg_xlogdump --follow  (Magnus Hagander, 2016-09-30)

  There is a small window between when the server closes out the existing segment and the new one is created. Put a loop around the open call in this case to make sure we wait for the new file to actually appear.
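
  A standalone sketch of the retry-loop idea (interval and retry count are arbitrary, not the committed values):

      #include <errno.h>
      #include <fcntl.h>
      #include <unistd.h>

      /* If the segment doesn't exist yet, wait briefly and retry rather than
       * giving up, since the server may still be creating the new file. */
      static int
      open_segment_with_retry(const char *path, int max_tries)
      {
          int fd = -1;

          for (int i = 0; i < max_tries; i++)
          {
              fd = open(path, O_RDONLY);
              if (fd >= 0 || errno != ENOENT)
                  break;
              usleep(100 * 1000);     /* 100 ms between attempts */
          }
          return fd;
      }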

* Silence compiler warnings  (Alvaro Herrera, 2016-09-28)

  Reported by Peter Eisentraut. Coding suggested by Tom Lane.

* worker_spi: Call pgstat_report_stat.  (Robert Haas, 2016-09-28)

  Without this, statistics changes accumulated by the worker never get reported to the stats collector, which is bad.

  Julien Rouhaud

* Include <sys/select.h> where needed  (Alvaro Herrera, 2016-09-27)

  <sys/select.h> is required by POSIX.1-2001 to get the prototype of select(2), but nearly no systems enforce that because older standards let you get away with including some other headers. Recent OpenBSD hacking has removed that frail touch of friendliness, however, which broke some compiles; fix all the way back to 9.1 by adding the standard-required header.

  Only vacuumdb.c was reported to fail, but it seems easier to fix the whole lot in one fell swoop.

  Per bug #14334 by Sean Farrell.

* Install TAP test infrastructure so it's available for extension testing.  (Tom Lane, 2016-09-23)

  When configured with --enable-tap-tests, "make install" will now install the Perl support files for TAP testing where PGXS will find them. This allows extensions to rely on $(prove_check) even when being built out-of-tree. Back-patch to 9.4 where we first started to support TAP testing, to reduce the number of cases extension makefiles need to consider.

  Craig Ringer

  Discussion: <CAMsr+YFXv+2qne6xJW7z_25mYBtktRX5rpkrgrb+DRgQ_FxgHQ@mail.gmail.com>

* Fix incorrect logic for excluding range constructor functions in pg_dump.  (Tom Lane, 2016-09-23)

  Faulty AND/OR nesting in the WHERE clause of getFuncs' SQL query led to dumping range constructor functions if they are part of an extension and we're in binary-upgrade mode. Actually, we don't want to dump them separately even then, since CREATE TYPE AS RANGE will create the range's constructor functions regardless. Per report from Andrew Dunstan.

  It looks like this mistake was introduced by me, in commit b985d4877, in perhaps-overzealous refactoring to reduce code duplication. I'm suitably embarrassed.

  Report: <34854939-02d7-f591-5677-ce2994104599@dunslane.net>

* Don't trust CreateFileMapping() to clear the error code on success.  (Tom Lane, 2016-09-23)

  We must test GetLastError() even when CreateFileMapping() returns a non-null handle. If that value were left over from some previous system call, we might be fooled into thinking the segment already existed.

  Experimentation on Windows 7 suggests that CreateFileMapping() clears the error code on success, but it is not documented to do so, so let's not rely on that happening in all Windows releases.

  Amit Kapila

  Discussion: <20811.1474390987@sss.pgh.pa.us>
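
  A sketch of the defensive pattern on Windows (names and signature are illustrative, not the committed code):

      #include <windows.h>
      #include <stdbool.h>

      /* Zero the thread's last-error value first, so a leftover
       * ERROR_ALREADY_EXISTS from an earlier call can't fool us. */
      static HANDLE
      create_mapping_checked(DWORD size, const char *name, bool *found_existing)
      {
          HANDLE hmap;

          SetLastError(0);
          hmap = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                    0, size, name);
          *found_existing = (hmap != NULL &&
                             GetLastError() == ERROR_ALREADY_EXISTS);
          return hmap;
      }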

* Avoid using PostmasterRandom() for DSM control segment ID.  (Tom Lane, 2016-09-23)

  Commits 470d886c3 et al intended to fix the problem that the postmaster selected the same "random" DSM control segment ID on every start. But using PostmasterRandom() for that destroys the intended property that the delay between random_start_time and random_stop_time will be unpredictable. (Said delay is probably already more predictable than we could wish, but that doesn't mean that reducing it by a couple orders of magnitude is OK.) Revert the previous patch and add a comment warning against misuse of PostmasterRandom. Fix the original problem by calling srandom() early in PostmasterMain, using a low-security seed that will later be overwritten by PostmasterRandom.

  Discussion: <20789.1474390434@sss.pgh.pa.us>

* Be sure to rewind the tuplestore read pointer in non-leader CTEScan nodes.  (Tom Lane, 2016-09-22)

  ExecInitCteScan supposed that it didn't have to do anything to the extra tuplestore read pointer it gets from tuplestore_alloc_read_pointer. However, it needs this read pointer to be positioned at the start of the tuplestore, while tuplestore_alloc_read_pointer is actually defined as cloning the current position of read pointer 0. In normal situations that accidentally works because we initialize the whole plan tree at once, before anything gets read. But it fails in an EvalPlanQual recheck, as illustrated in bug #14328 from Dima Pavlov.

  To fix, just forcibly rewind the pointer after tuplestore_alloc_read_pointer. The cost of doing so is negligible unless the tuplestore is already in TSS_READFILE state, which wouldn't happen in normal cases. We could consider altering tuplestore's API to make that case cheaper, but that would make for a more invasive back-patch and it doesn't seem worth it.

  This has been broken probably for as long as we've had CTEs, so back-patch to all supported branches.

  Discussion: <32468.1474548308@sss.pgh.pa.us>

* Fix pgbench's calculation of average latency, when -T is not used.  (Heikki Linnakangas, 2016-09-21)

  If the test duration was given in # of transactions (-t or no option), rather than as a duration (-T), the latency average was always printed as 0. It has been broken ever since the display of latency average was added, in 9.4.

  Fabien Coelho

  Discussion: <alpine.DEB.2.20.1607131015370.7486@sto>

* Use PostmasterRandom(), not random(), for DSM control segment ID.  (Robert Haas, 2016-09-20)

  Otherwise, every startup gets the same "random" value, which is definitely not what was intended.

* Retry DSM control segment creation if Windows indicates access denied.  (Robert Haas, 2016-09-20)

  Otherwise, attempts to run multiple postmasters on the same machine may fail, because Windows sometimes returns ERROR_ACCESS_DENIED rather than ERROR_ALREADY_EXISTS when there is an existing segment. Hitting this bug is much more likely because of another defect not fixed by this patch, namely that dsm_postmaster_startup() uses random() which returns the same value every time. But that's not a reason not to fix this.

  Kyotaro Horiguchi and Amit Kapila, reviewed by Michael Paquier

  Discussion: <CAA4eK1JyNdMeF-dgrpHozDecpDfsRZUtpCi+1AbtuEkfG3YooQ@mail.gmail.com>

* Fix outdated comments, GiST search queue is not an RBTree anymore.  (Heikki Linnakangas, 2016-09-20)

  The GiST search queue is implemented as a pairing heap rather than as a Red-Black Tree, since 9.5 (commit e7032610). I neglected these comments in that commit.

* Fix latency calculation when there are \sleep commands in the script.  (Heikki Linnakangas, 2016-09-19)

  We can't use txn_scheduled to hold the sleep-until time for \sleep, because that interferes with calculation of the latency of the transaction as a whole.

  Backpatch to 9.4, where this bug was introduced.

  Fabien COELHO

  Discussion: <alpine.DEB.2.20.1608231622170.7102@lancre>

* MSVC: Include pg_recvlogical in client-only install.  (Robert Haas, 2016-09-19)

  MauMau, reviewed by Michael Paquier

* Fix ecpg -? option on Windows, add -V alias for --version.  (Heikki Linnakangas, 2016-09-18)

  This makes the -? and -V options work consistently with other binaries. --help and --version are now only recognized as the first option, i.e. "ecpg --foobar --help" no longer prints the help, but that's consistent with most of our other binaries, too.

  Backpatch to all supported versions.

  Haribabu Kommi

  Discussion: <CAJrrPGfnRXvmCzxq6Dy=stAWebfNHxiL+Y_z7uqksZUCkW_waQ@mail.gmail.com>

* Fix building with LibreSSL.  (Heikki Linnakangas, 2016-09-15)

  LibreSSL defines OPENSSL_VERSION_NUMBER to claim that it is version 2.0.0, but it doesn't have the functions added in OpenSSL 1.1.0. Add autoconf checks for the individual functions we need, and stop relying on OPENSSL_VERSION_NUMBER.

  Backport to 9.5 and 9.6, like the patch that broke this. In the back-branches, there are still a few OPENSSL_VERSION_NUMBER checks left, to check for OpenSSL 0.9.8 or 0.9.7. I left them as they were - LibreSSL has all those functions, so they work as intended.

  Per buildfarm member curculio.

  Discussion: <2442.1473957669@sss.pgh.pa.us>

* Support OpenSSL 1.1.0.  (Heikki Linnakangas, 2016-09-15)

  Changes needed to build at all:

  - Check for SSL_new in configure, now that SSL_library_init is a macro.
  - Do not access struct members directly. This includes some new code in pgcrypto, to use the resource owner mechanism to ensure that we don't leak OpenSSL handles, now that we can't embed them in other structs anymore.
  - RAND_SSLeay() -> RAND_OpenSSL()

  Changes that were needed to silence deprecation warnings, but were not strictly necessary:

  - RAND_pseudo_bytes() -> RAND_bytes().
  - SSL_library_init() and OpenSSL_config() -> OPENSSL_init_ssl()
  - ASN1_STRING_data() -> ASN1_STRING_get0_data()
  - DH_generate_parameters() -> DH_generate_parameters_ex()
  - Locking callbacks are not needed with OpenSSL 1.1.0 anymore. (Good riddance!)

  Also change references to SSLEAY_VERSION_NUMBER with OPENSSL_VERSION_NUMBER, for the sake of consistency. OPENSSL_VERSION_NUMBER has existed since time immemorial.

  Fix SSL test suite to work with OpenSSL 1.1.0. CA certificates must have the "CA:true" basic constraint extension now, or OpenSSL will refuse them. Regenerate the test certificates with that. The "openssl" binary, used to generate the certificates, is also now more picky, and throws an error if an X509 extension is specified in "req_extensions", but that section is empty.

  Backpatch to 9.5 and 9.6, per popular demand. The file structure was somewhat different in earlier branches, so I didn't bother to go further than that. In back-branches, we still support OpenSSL 0.9.7 and above. OpenSSL 0.9.6 should still work too, but I didn't test it. In master, we only support 0.9.8 and above.

  Patch by Andreas Karlsson, with additional changes by me.

  Discussion: <20160627151604.GD1051@msg.df7cb.de>
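
  One example of the accessor-vs-direct-member pattern, sketched with a version guard (illustrative helper, not the committed code):

      #include <openssl/asn1.h>
      #include <openssl/opensslv.h>

      /* With OpenSSL 1.1.0 the ASN1_STRING struct is opaque, so use the new
       * getter there while keeping the old call for earlier releases. */
      static const unsigned char *
      asn1_string_bytes(ASN1_STRING *s)
      {
      #if OPENSSL_VERSION_NUMBER >= 0x10100000L
          return ASN1_STRING_get0_data(s);
      #else
          return ASN1_STRING_data(s);
      #endif
      }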

* Fix copy/pasto in file identification  (Simon Riggs, 2016-09-12)

  Daniel Gustafsson

* Improve unreachability recognition in elog() macro.  (Tom Lane, 2016-09-10)

  Some experimentation with an older version of gcc showed that it is able to determine whether "if (elevel_ >= ERROR)" is compile-time constant if elevel_ is declared "const", but otherwise not so much. We had accounted for that in ereport() but were too miserly with braces to make it so in elog().

  I don't know how many currently-interesting compilers have the same quirk, but in case it will save some code space, let's make sure that elog() is on the same footing as ereport() for this purpose.

  Back-patch to 9.3 where we introduced pg_unreachable() calls into elog/ereport.
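
  A simplified sketch of the pattern being relied on (hypothetical my_elog(), not the real elog.h text): declaring the captured level const lets the compiler fold the ERROR test and honor the unreachability hint.

      #define my_elog(elevel, ...) \
          do { \
              const int elevel_ = (elevel);           /* "const" enables the compile-time fold */ \
              my_elog_report(elevel_, __VA_ARGS__);   /* hypothetical reporting call */ \
              if (elevel_ >= ERROR) \
                  pg_unreachable(); \
          } while (0)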

* Fix miserable coding in pg_stat_get_activity().  (Tom Lane, 2016-09-10)

  Commit dd1a3bccc replaced a test on whether a subroutine returned a null pointer with a test on whether &pointer->backendStatus was null. This accidentally failed to fail, at least on common compilers, because backendStatus is the first field in the struct; but it was surely trouble waiting to happen. Commit f91feba87 then messed things up further, changing the logic to

      local_beentry = pgstat_fetch_stat_local_beentry(curr_backend);
      if (!local_beentry)
          continue;
      beentry = &local_beentry->backendStatus;
      if (!beentry)
      {

  where the second "if" is now dead code, so that the intended behavior of printing a row with "<backend information not available>" cannot occur. I suspect this is all moot because pgstat_fetch_stat_local_beentry will never actually return null in this function's usage, but it's still very poor coding. Repair back to 9.4 where the original problem was introduced.

* Fix locking a tuple updated by an aborted (sub)transaction  (Alvaro Herrera, 2016-09-09)

  When heap_lock_tuple decides to follow the update chain, it tried to also lock any version of the tuple that was created by an update that was subsequently rolled back. This is pointless, since for all intents and purposes that tuple exists no more; and moreover it causes misbehavior, as reported independently by Marko Tiikkaja and Marti Raudsepp: some SELECT FOR UPDATE/SHARE queries may fail to return the tuples, and assertion-enabled builds crash.

  Fix by having heap_lock_updated_tuple test the xmin and return success immediately if the tuple was created by an aborted transaction.

  The condition where tuples become invisible occurs when an updated tuple chain is followed by heap_lock_updated_tuple, which reports the problem as HeapTupleSelfUpdated to its caller heap_lock_tuple, which in turn propagates that code outwards, possibly leading the calling code (ExecLockRows) to believe that the tuple exists no longer.

  Backpatch to 9.3. Only on 9.5 and newer does this lead to a visible failure, because of commit 27846f02c176; before that, heap_lock_tuple skips the whole dance when the tuple is already locked by the same transaction, because of the ancient HeapTupleSatisfiesUpdate behavior. Still, the buggy condition may also exist in more convoluted scenarios involving concurrent transactions, so it seems safer to fix the bug in the old branches too.

  Discussion:
  https://www.postgresql.org/message-id/CABRT9RC81YUf1=jsmWopcKJEro=VoeG2ou6sPwyOUTx_qteRsg@mail.gmail.com
  https://www.postgresql.org/message-id/48d3eade-98d3-8b9a-477e-1a8dc32a724d@joh.to

* Fix VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL  (Simon Riggs, 2016-09-09)

  lazy_truncate_heap() was waiting for VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL, but in microseconds not milliseconds as originally intended.

  Found by code inspection.

  Simon Riggs
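
  A small sketch of the unit mix-up (pg_usleep() takes microseconds, while the interval constant is meant in milliseconds; the exact constant value is assumed here):

      #define VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL  50      /* ms */

      /* buggy: sleeps 50 microseconds */
      pg_usleep(VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL);

      /* intended: convert milliseconds to microseconds */
      pg_usleep(VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL * 1000L);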

* Fix mdtruncate() to close fd.c handle of deleted segments.  (Andres Freund, 2016-09-08)

  mdtruncate() forgot to FileClose() a segment's mdfd_vfd, when deleting it. That led to a fd.c handle to a truncated file being kept open until backend exit.

  The issue appears to have been introduced way back in 1a5c450f3024ac5, before that the handle was closed inside FileUnlink().

  The impact of this bug is limited - only VACUUM and ON COMMIT TRUNCATE for temporary tables, truncate files in place (i.e. TRUNCATE itself is not affected), and the relation has to be bigger than 1GB. The consequences of a leaked fd.c handle aren't severe either.

  Discussion: <20160908220748.oqh37ukwqqncbl3n@alap3.anarazel.de>
  Backpatch: all supported releases

* Don't print database's tablespace in pg_dump -C --no-tablespaces output.  (Tom Lane, 2016-09-08)

  If the database has a non-default tablespace, we emitted a TABLESPACE clause in the CREATE DATABASE command emitted by -C, even if --no-tablespaces was also specified. This seems wrong, and it's inconsistent with what pg_dumpall does, so change it. Per bug #14315 from Danylo Hlynskyi.

  Back-patch to 9.5. The bug is much older, but it'd be a more invasive change before 9.5 because dumpDatabase() hasn't got an easy way to get to the outputNoTablespaces flag. Doesn't seem worth the work given the lack of previous complaints.

  Report: <20160908081953.1402.75347@wrigleys.postgresql.org>

* Add regression test coverage for non-default timezone abbreviation sets.  (Tom Lane, 2016-09-04)

  After further reflection about the mess cleaned up in commit 39b691f25, I decided the main bit of test coverage that was still missing was to check that the non-default abbreviation-set files we supply are usable. Add that.

  Back-patch to supported branches, just because it seems like a good idea to keep this all in sync.