path: root/contrib
* Fix SHLIB_PREREQS use in contrib, allowing PGXS builds (Peter Eisentraut, 2014-12-04)

  dblink and postgres_fdw use SHLIB_PREREQS = submake-libpq to build libpq
  first. This doesn't work in a PGXS build, because there is no libpq to
  build. So just omit setting SHLIB_PREREQS in this case.

  Note that PGXS users can still use SHLIB_PREREQS (although it is not
  documented). The problem here is only that contrib modules can be built
  in-tree or using PGXS, and the prerequisite is only applicable in the
  former case.

  Commit 6697aa2bc25c83b88d6165340348a31328c35de6 previously attempted to
  address this by creating a somewhat fake submake-libpq target in
  Makefile.global. That was not the right fix, and it was also done in a
  nonportable way, so revert that.
* Don't skip SQL backends in logical decoding for visibility computation. (Andres Freund, 2014-12-02)

  The logical decoding patchset introduced the PROC_IN_LOGICAL_DECODING
  PGXACT flag, which allows such backends to be skipped when computing the
  xmin horizon/snapshots. That's fine and sensible for walsenders streaming
  out logical changes, but not at all fine for SQL backends doing logical
  decoding. If the latter set that flag, any change they have performed
  outside of logical decoding will not be regarded as visible, which can
  e.g. lead to that change being vacuumed away.

  Note that not setting the flag for SQL backends isn't particularly
  bothersome: the SQL backend doesn't do streaming, so it only runs for a
  limited amount of time.

  Per buildfarm member 'tick' and Alvaro.

  Backpatch to 9.4, where logical decoding was introduced.
* Fix hstore_to_json_loose's detection of valid JSON number values. (Andrew Dunstan, 2014-12-01)

  We expose a function IsValidJsonNumber that internally calls the lexer
  for json numbers. That allows us to use the same test everywhere, instead
  of inventing a broken test for hstore conversions. The new function is
  also used in datum_to_json, replacing the code that is now moved to the
  new function.

  Backpatch to 9.3 where hstore_to_json_loose was introduced.
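  A minimal sketch of how a caller can use the shared test; it assumes the
  9.4-era declaration of IsValidJsonNumber in utils/json.h:

      #include "postgres.h"
      #include "utils/json.h"

      static bool
      looks_like_json_number(const char *s)
      {
          /*
           * IsValidJsonNumber runs the real JSON number lexer over the
           * string, so "1e100" passes while "1.2.3" or "0x10" does not.
           */
          return IsValidJsonNumber(s, (int) strlen(s));
      }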
* Free libxml2/libxslt resources in a safer order. (Tom Lane, 2014-11-27)

  Mark Simonetti reported that libxslt sometimes crashes for him, and that
  swapping xslt_process's object-freeing calls around to do them in reverse
  order of creation seemed to fix it. I've not reproduced the crash, but
  valgrind clearly shows a reference to already-freed memory, which is
  consistent with the idea that shutdown of the xsltTransformContext is
  trying to reference the already-freed stylesheet or input document. With
  this patch, valgrind is no longer unhappy.

  I have an inquiry in to see if this is a libxslt bug or if we're just
  abusing the library; but even if it's a library bug, we'd want to adjust
  our code so it doesn't fail with unpatched libraries.

  Back-patch to all supported branches, because we've been doing this in
  the wrong(?) order for a long time.
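  An illustrative sketch of the "reverse order of creation" teardown, using
  standard libxml2/libxslt entry points (the variable names are
  hypothetical):

      xmlDocPtr         doctree    = xmlParseMemory(doc_text, doc_len);    /* created 1st */
      xsltStylesheetPtr stylesheet = xsltParseStylesheetDoc(ssdoc);        /* created 2nd */
      xmlDocPtr         restree    = xsltApplyStylesheet(stylesheet,
                                                         doctree, params); /* created 3rd */

      /* ... use restree ... */

      xmlFreeDoc(restree);             /* free the last-created object first */
      xsltFreeStylesheet(stylesheet);  /* this also frees ssdoc */
      xmlFreeDoc(doctree);             /* free the first-created object last */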
* Fix mishandling of system columns in FDW queries. (Tom Lane, 2014-11-22)

  postgres_fdw would send query conditions involving system columns to the
  remote server, even though it makes no effort to ensure that system
  columns other than CTID match what the remote side thinks. tableoid, in
  particular, probably won't match and might have some use in queries.
  Hence, prevent sending conditions that include non-CTID system columns.

  Also, create_foreignscan_plan neglected to check local restriction
  conditions while determining whether to set fsSystemCol for a foreign
  scan plan node. This again would bollix the results for queries that
  test a foreign table's tableoid.

  Back-patch the first fix to 9.3 where postgres_fdw was introduced.
  Back-patch the second to 9.2. The code is probably broken in 9.1 as
  well, but the patch doesn't apply cleanly there; given the weak state of
  support for FDWs in 9.1, it doesn't seem worth fixing.

  Etsuro Fujita, reviewed by Ashutosh Bapat, and somewhat modified by me
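  The shape of the first fix is roughly as below; system column attribute
  numbers are negative, and SelfItemPointerAttributeNumber (from
  access/sysattr.h) is CTID. A sketch, not the exact patch:

      /* inside the expression walker deciding what is safe to ship */
      if (IsA(node, Var))
      {
          Var *var = (Var *) node;

          /* ship CTID, but no other system column */
          if (var->varattno < 0 &&
              var->varattno != SelfItemPointerAttributeNumber)
              return false;   /* condition must be evaluated locally */
      }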
* Avoid file descriptor leak in pg_test_fsync. (Robert Haas, 2014-11-19)

  This can cause problems on Windows, where files that are still open
  can't be unlinked.

  Jeff Janes
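  A minimal sketch of the pattern being fixed (filename and error handling
  illustrative):

      int fd = open(test_filename, O_RDWR | O_CREAT, 0600);

      if (fd < 0)
          die("could not open test file");
      /* ... run the fsync timing tests against fd ... */
      close(fd);              /* must happen before unlink() on Windows */
      unlink(test_filename);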
* Fix and improve cache invalidation logic for logical decoding. (Andres Freund, 2014-11-13)

  There are basically three situations in which logical decoding needs to
  perform cache invalidation: during/after replaying a transaction with
  catalog changes, when skipping an uninteresting transaction that
  performed catalog changes, and when erroring out while replaying a
  transaction. Unfortunately these three cases were all done slightly
  differently, partially because 8de3e410fa, which greatly simplifies
  matters, got committed in the midst of the development of logical
  decoding.

  The actually problematic case was when logical decoding skipped
  transaction commits (and thus processed invalidations). When used via the
  SQL interface, cache invalidation could access the catalog; that's bad,
  because we didn't set up enough state to allow that correctly. It'd not
  be hard to set up sufficient state, but the simpler solution is to always
  perform cache invalidation outside a valid transaction.

  Also make the different cache invalidation cases look as similar as
  possible, to ease code review.

  This fixes the assertion failure reported by Antonin Houska in
  53EE02D9.7040702@gmail.com. The presented testcase has been expanded into
  a regression test.

  Backpatch to 9.4, where logical decoding was introduced.
* Fix several weaknesses in slot and logical replication on-disk serialization. (Andres Freund, 2014-11-12)

  Heikki noticed in 544E23C0.8090605@vmware.com that slot.c and snapbuild.c
  were missing the FIN_CRC32 call when computing/checking checksums of
  on-disk files. That doesn't lower the error detection capabilities of the
  checksum, but is inconsistent with other usages.

  In a followup mail Heikki also noticed that, contrary to a comment, the
  'version' and 'length' struct fields of a replication slot's on-disk data
  were not covered by the checksum. That's not likely to lead to actually
  missed corruption, as those fields are cross-checked with the expected
  version and the actual file length, but it's wrong nonetheless.

  As fixing these issues makes existing on-disk files unreadable, bump the
  expected versions of on-disk files for both slots and logical decoding
  historic catalog snapshots. This means that loading old files will fail
  with ERROR: "replication slot file ... has unsupported version 1" and
  ERROR: "snapbuild state file ... has unsupported version 1 instead of 2"
  respectively. Given the low likelihood of anybody already using these new
  features in a production setup, that seems acceptable.

  Fixing these issues made me notice that there's no regression test
  covering the loading of a historic snapshot from disk, so add one.

  Backpatch to 9.4 where these features were introduced.
* Add interrupt checks to contrib/pg_prewarm. (Andres Freund, 2014-11-12)

  The extension's pg_prewarm() function didn't check for interrupts once it
  started "warming" data. Since individual calls can take a long while,
  it's important for them to be interruptible.

  Backpatch to 9.4 where pg_prewarm was introduced.
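  A sketch of the fix, assuming the usual shape of the buffer-warming loop
  in pg_prewarm.c:

      for (block = first_block; block <= last_block; ++block)
      {
          Buffer  buf;

          /* make long-running pg_prewarm() calls cancellable */
          CHECK_FOR_INTERRUPTS();

          buf = ReadBufferExtended(rel, forkNumber, block, RBM_NORMAL, NULL);
          ReleaseBuffer(buf);
          ++blocks_done;
      }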
* Loop when necessary in contrib/pgcrypto's pktreader_pull(). (Tom Lane, 2014-11-11)

  This fixes a scenario in which pgp_sym_decrypt() failed with "Wrong key
  or corrupt data" on messages whose length is 6 less than a power of 2.

  Per bug #11905 from Connor Penhale. Fix by Marko Tiikkaja, regression
  test case from Jeff Janes.
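  The general pattern of such a fix, sketched against pgcrypto's
  pull-filter API (assuming the mbuf.h signature int pullf_read(PullFilter
  *, int, uint8 **)): a single read may return less than requested, so
  loop until the wanted length or EOF is reached.

      static int
      read_fully(PullFilter *src, int len, uint8 *dst)
      {
          int     got = 0;

          while (got < len)
          {
              uint8  *chunk;
              int     res = pullf_read(src, len - got, &chunk);

              if (res < 0)
                  return res;         /* propagate error code */
              if (res == 0)
                  break;              /* EOF */
              memcpy(dst + got, chunk, res);
              got += res;
          }
          return got;
      }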
* Fix volatility markings of some contrib I/O functions. (Tom Lane, 2014-11-05)

  In general, datatype I/O functions are supposed to be immutable or at
  worst stable. Some contrib I/O functions were, through oversight, not
  marked with any volatility property at all, which made them VOLATILE.
  Since (most of) these functions actually behave immutably, the erroneous
  marking isn't terribly harmful; but it can be user-visible in certain
  circumstances, as per a recent bug report from Joe Van Dyk in which a
  cast to text was disallowed in an expression index definition.

  To fix, just adjust the declarations in the extension SQL scripts. If we
  were being very fussy about this, we'd bump the extension version
  numbers, but that seems like more trouble (for both developers and users)
  than the problem is worth.

  A fly in the ointment is that chkpass_in actually is volatile, because of
  its use of random() to generate a fresh salt when presented with a
  not-yet-encrypted password. This is bad because of the general assumption
  that I/O functions aren't volatile: the consequence is that records or
  arrays containing chkpass elements may have input behavior a bit
  different from a bare chkpass column. But there seems no way to fix this
  without breaking existing usage patterns for chkpass, and the
  consequences of the inconsistency don't seem bad enough to justify that.
  So for the moment, just document it in a comment.

  Since we're not bumping version numbers, there seems no harm in
  back-patching these fixes; at least future installations will get the
  functions marked correctly.
* Docs: fix incorrect spelling of contrib/pgcrypto option. (Tom Lane, 2014-11-03)

  pgp_sym_encrypt's option is spelled "sess-key", not "enable-session-key".
  Spotted by Jeff Janes.

  In passing, improve a comment in pgp-pgsql.c to make it clearer that the
  debugging options are intentionally undocumented.
* Make the locale comparison in pg_upgrade more lenient (Heikki Linnakangas, 2014-10-24)

  If the locale names are not equal, try to canonicalize both of them by
  passing them to setlocale(). Before, we only canonicalized the old
  cluster's locale if upgrading from an 8.4-9.2 server, but we also need to
  canonicalize when upgrading from a pre-8.4 server. That was an oversight
  in the code. But we should also canonicalize on newer server versions, so
  that we cope if the canonical form changes from one release to another.
  I'm about to do just that to fix bug #11431, by mapping a locale name
  that contains non-ASCII characters to a pure-ASCII alias of the same
  locale.

  This is a partial backpatch of commit
  33755e8edf149dabfc0ed9b697a84f70b0cca0de in master. Apply to 9.2, 9.3 and
  9.4. The canonicalization code didn't exist before 9.2. In 9.2 and 9.3,
  this effectively also back-patches the changes from commit
  58274728fb8e087049df67c0eee903d9743fdeda, to be more lax about the
  spelling of the encoding in the locale names.
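  Canonicalization via setlocale() can look roughly like this sketch
  (pg_strdup is the frontend allocator; the helper name is hypothetical):

      static char *
      canonicalize_locale_name(int category, const char *locale)
      {
          /* remember the current setting so we can restore it */
          char   *save = pg_strdup(setlocale(category, NULL));
          char   *res = setlocale(category, locale);
          char   *canon = res ? pg_strdup(res) : NULL;

          setlocale(category, save);      /* put things back */
          free(save);
          return canon;   /* system's canonical spelling, or NULL */
      }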
* Support timezone abbreviations that sometimes change. (Tom Lane, 2014-10-16)

  Up to now, PG has assumed that any given timezone abbreviation (such as
  "EDT") represents a constant GMT offset in the usage of any particular
  region; we had a way to configure what that offset was, but not for it to
  be changeable over time. But, as with most things horological, this view
  of the world is too simplistic: there are numerous regions that have at
  one time or another switched to a different GMT offset but kept using the
  same timezone abbreviation. Almost the entire Russian Federation did that
  a few years ago, and later this month they're going to do it again. And
  there are similar examples all over the world.

  To cope with this, invent the notion of a "dynamic timezone
  abbreviation", which is one that is referenced to a particular underlying
  timezone (as defined in the IANA timezone database) and means whatever it
  currently means in that zone. For zones that use or have used
  daylight-savings time, the standard and DST abbreviations continue to
  have the property that you can specify standard or DST time and get that
  time offset whether or not DST was theoretically in effect at the time.
  However, the abbreviations mean what they meant at the time in question
  (or most recently before that time) rather than being absolutely fixed.

  The standard abbreviation-list files have been changed to use this
  behavior for abbreviations that have actually varied in meaning since
  1970. The old simple-numeric definitions are kept for abbreviations that
  have not changed, since they are a bit faster to resolve.

  While this is clearly a new feature, it seems necessary to back-patch it
  into all active branches, because otherwise use of Russian zone
  abbreviations is going to become even more problematic than it already
  was. This change supersedes the changes in commit 513d06ded et al to
  modify the fixed meanings of the Russian abbreviations; since we've not
  shipped that yet, this will avoid an undesirably incompatible (not to
  mention incorrect) change in behavior for timestamps between 2011 and
  2014.

  This patch makes some cosmetic changes in ecpglib to keep its usage of
  datetime lookup tables as similar as possible to the backend code, but
  doesn't do anything about the increasingly obsolete set of timezone
  abbreviation definitions that are hard-wired into ecpglib. Whatever we do
  about that will likely not be appropriate material for back-patching.
  Also, a potential free() of a garbage pointer after an out-of-memory
  failure in ecpglib has been fixed.

  This patch also fixes pre-existing bugs in DetermineTimeZoneOffset() that
  caused it to produce unexpected results near a timezone transition, if
  both the "before" and "after" states are marked as standard time. We'd
  only ever thought about or tested transitions between standard and DST
  time, but that's not what's happening when a zone simply redefines their
  base GMT offset.

  In passing, update the SGML documentation to refer to the
  Olson/zoneinfo/zic timezone database as the "IANA" database, since it's
  now being maintained under the auspices of IANA.
* Print planning time only in EXPLAIN ANALYZE, not plain EXPLAIN. (Tom Lane, 2014-10-15)

  We've gotten enough push-back on that change to make it clear that it
  wasn't an especially good idea to do it like that. Revert plain EXPLAIN
  to its previous behavior, but keep the extra output in EXPLAIN ANALYZE.
  Per discussion.

  Internally, I set this up as a separate flag ExplainState.summary that
  controls printing of planning time and execution time. For now it's just
  copied from the ANALYZE option, but we could consider exposing it to
  users.
* Fix typo in error message. (Heikki Linnakangas, 2014-10-02)
* Improve documentation about binary/textual output mode for output plugins. (Andres Freund, 2014-10-01)

  Discussion: CAB7nPqQrqFzjqCjxu4GZzTrD9kpj6HMn9G5aOOMwt1WZ8NfqeA@mail.gmail.com,
  CAB7nPqQXc_+g95zWnqaa=mVQ4d3BVRs6T41frcEYi2ocUrR3+A@mail.gmail.com

  Per discussion between Michael Paquier, Robert Haas and Andres Freund

  Backpatch to 9.4 where logical decoding was introduced.
* pg_upgrade: have pg_upgrade fail for old 9.4 JSONB format (Bruce Momjian, 2014-09-29)

  Backpatch through 9.4
* Fix failure of contrib/auto_explain to print per-node timing information. (Tom Lane, 2014-09-19)

  This has been broken since commit
  af7914c6627bcf0b0ca614e9ce95d3f8056602bf, which added the EXPLAIN
  (TIMING) option. Although that commit included updates to auto_explain,
  they evidently weren't tested very carefully, because the code failed to
  print node timings even when it should, due to failure to set es.timing
  in the ExplainState struct. Reported off-list by Neelakanth Nadgir of
  Salesforce.

  In passing, clean up the documentation for auto_explain's options a
  little bit, including re-ordering them into what seems to me a more
  logical order.
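  The essence of the fix is one assignment when auto_explain builds its
  ExplainState; a sketch, assuming the module's auto_explain_log_analyze
  and auto_explain_log_timing GUC variables:

      ExplainState es;

      ExplainInitState(&es);
      es.analyze = (queryDesc->instrument_options && auto_explain_log_analyze);
      es.timing = (es.analyze && auto_explain_log_timing);  /* the missing line */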
* pg_upgrade: preserve the timestamp epoch (Bruce Momjian, 2014-09-11)

  This is useful for replication tools like Slony and Skytools. This is a
  backpatch of a74a4aa23bb95b590ff01ee564219d2eacea3706.

  Report by Sergey Konoplev

  Backpatch through 9.3
* Fix Windows build. (Heikki Linnakangas, 2014-09-11)

  I renamed a variable, but missed an #ifdef WIN32 block.
* Simplify calculation of Poisson distributed delays in pgbench --rate mode. (Heikki Linnakangas, 2014-09-11)

  The previous coding first generated a uniform random value between 0.0
  and 1.0, then converted that to an integer between 1 and 10000, and
  divided that again by 10000. Those conversions are unnecessary; we can
  use the double value that pg_erand48() returns directly. While we're at
  it, put the logic into a helper function, getPoissonRand().

  The largest delay generated by the old coding was about 9.2 times the
  average, because of the way the uniformly distributed value used for the
  calculation was truncated to 1/10000 granularity. The new coding doesn't
  have such clamping. With my laptop's DBL_MIN value, the maximum delay
  with the new coding is about 700x the average. That seems acceptable;
  any reasonable pgbench session should last long enough to average that
  out.

  Backpatch to 9.4.
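  The helper boils down to an inverse-transform draw from the exponential
  distribution; a sketch consistent with the description above (pgbench's
  TState and pg_erand48 are real, but treat the details as illustrative):

      static int64
      getPoissonRand(TState *thread, int64 center)
      {
          /*
           * pg_erand48() is uniform on [0, 1); use 1.0 - value so the
           * argument to log() lies in (0, 1] and can never be zero.
           */
          double  uniform = 1.0 - pg_erand48(thread->random_state);

          return (int64) (-log(uniform) * ((double) center) + 0.5);
      }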
* Change the way latency is calculated with pgbench --rate option. (Heikki Linnakangas, 2014-09-11)

  The reported latency values now include the "schedule lag" time, that is,
  the time between the transaction's scheduled start time and the time it
  actually started. This relates better to a model where requests arrive at
  a certain rate, and we are interested in the response time to the end
  user or application, rather than the response time of the database
  itself. Also, when --rate is used, include the schedule lag time in the
  log output.

  The --rate option is new in 9.4, so backpatch to 9.4. It seems better to
  make this change in 9.4, while we're still in the beta period, than ship
  a 9.4 version that calculates the values differently than 9.5.
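  In other words, with scheduled start s, actual start b, and finish f,
  pgbench now reports latency = f - s rather than f - b, and logs
  lag = b - s. A sketch with hypothetical microsecond-valued fields:

      /* all values are microseconds; field names are illustrative */
      latency = now_us - st->txn_scheduled;            /* f - s, includes lag */
      lag     = st->txn_begin_us - st->txn_scheduled;  /* b - s, queueing delay */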
* doc: Reflect renaming of Mac OS X to OS X (Peter Eisentraut, 2014-09-09)

  bug #10528
* Assorted message fixes and improvements (Peter Eisentraut, 2014-09-05)
* Add skip-empty-xacts option to test_decoding for use in the regression tests. (Andres Freund, 2014-09-01)

  The regression tests for contrib/test_decoding regularly failed on
  postgres instances that were very slow, either because the hardware
  itself was slow or because very expensive debugging options like
  CLOBBER_CACHE_ALWAYS were used. The reason they failed was just that some
  additional transactions were decoded: analyze and vacuum runs triggered
  by autovacuum.

  To fix, just add an option to test_decoding to only display transactions
  in which a change was actually displayed. That's not pretty because it
  removes information from the tests; but it's better than constantly
  failing tests in very likely harmless ways.

  Backpatch to 9.4 where logical decoding was introduced.

  Discussion: 20140629142511.GA26930@awork2.anarazel.de
* Fix citext upgrade script for disallowance of oidvector element assignment. (Tom Lane, 2014-08-28)

  In commit 45e02e3232ac7cc5ffe36f7986159b5e0b1f6fdc, we intentionally
  disallowed updates on individual elements of oidvector columns. While
  that still seems like a sane idea in the abstract, we (I) forgot that
  citext's "upgrade from unpackaged" script did in fact perform exactly
  such updates, in order to fix the problem that citext indexes should
  have a collation but would not in databases dumped or upgraded from
  pre-9.1 installations.

  Even if we wanted to add casts to allow such updates, there's no
  practical way to do so in the back branches, so the only real alternative
  is to make citext's kluge even klugier. In this patch, I cast the
  oidvector to text, fix its contents with regexp_replace, and cast back to
  oidvector. (Ugh!)

  Since the aforementioned commit went into all active branches, we have to
  fix this in all branches that contain the now-broken update script.

  Per report from Eric Malm.
* Fix typos in some error messages thrown by extension scripts when fed to psql. (Andres Freund, 2014-08-25)

  Some of the many error messages introduced in 458857cc missed 'FROM
  unpackaged'. Also, e016b724 and 45ffeb7e forgot to quote extension
  version numbers.

  Backpatch to 9.1, just like 458857cc which introduced the messages. Do so
  because the error messages thrown when the wrong command is copy & pasted
  aren't easy to understand.
* Don't track DEALLOCATE in pg_stat_statements. (Heikki Linnakangas, 2014-08-25)

  We also don't track PREPARE, nor do we track planning time in general, so
  let's ignore DEALLOCATE as well for consistency.

  Backpatch to 9.4, but not further than that. Although it seems unlikely
  that anyone is relying on the current behavior, this is a behavioral
  change.

  Fabien Coelho
* pg_upgrade: prevent oid conflicts with new-cluster TOAST tables (Bruce Momjian, 2014-08-07)

  Previously, TOAST tables only required in the new cluster could cause oid
  conflicts if they were auto-numbered and a later conflicting oid had to
  be assigned.

  Backpatch through 9.3
* pg_upgrade: remove reference to autovacuum_multixact_freeze_max_age (Bruce Momjian, 2014-08-04)

  autovacuum_multixact_freeze_max_age was added as a pg_ctl start parameter
  in 9.3.X to prevent autovacuum from running. However, only some 9.3.X
  releases have autovacuum_multixact_freeze_max_age, as it was added in a
  minor PG 9.3 release. It also isn't needed because -b turns off
  autovacuum in 9.1+.

  Without this fix, trying to upgrade from an early 9.3 release to 9.4
  would fail.

  Report by EDB

  Backpatch through 9.3
* Check block number against the correct fork in get_raw_page(). (Tom Lane, 2014-07-22)

  get_raw_page tried to validate the supplied block number against
  RelationGetNumberOfBlocks(), which of course is only right when accessing
  the main fork. In most cases, the main fork is longer than the others, so
  that the check was too weak (allowing a lower-level error to be reported,
  but no real harm to be done). However, very small tables could have an
  FSM larger than their heap, in which case the mistake prevented access to
  some FSM pages. Per report from Torsten Foertsch.

  In passing, make the bad-block-number error into an ereport not elog
  (since it's certainly not an internal error); and fix sloppily maintained
  comment for RelationGetNumberOfBlocksInFork.

  This has been wrong since we invented relation forks, so back-patch to
  all supported branches.
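  The corrected check, sketched (RelationGetNumberOfBlocksInFork is the
  real fork-aware function; the error text is illustrative):

      if (blkno >= RelationGetNumberOfBlocksInFork(rel, forknum))
          ereport(ERROR,
                  (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                   errmsg("block number %u is out of range for relation \"%s\"",
                          blkno, RelationGetRelationName(rel))));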
* Diagnose incompatible OpenLDAP versions during build and test. (Noah Misch, 2014-07-22)

  With OpenLDAP versions 2.4.24 through 2.4.31, inclusive, PostgreSQL
  backends can crash at exit. Raise a warning during "configure" based on
  the compile-time OpenLDAP version number, and test the crash scenario in
  the dblink test suite.

  Back-patch to 9.0 (all supported versions).
* pg_upgrade: Fix spacing in help output (Peter Eisentraut, 2014-07-15)
* Fix decoding of consecutive MULTI_INSERTs emitted by one heap_multi_insert(). (Andres Freund, 2014-07-12)

  Commit 1b86c81d2d fixed the decoding of toasted columns for the rows
  contained in one xl_heap_multi_insert record. But that's not actually
  enough, because heap_multi_insert() will actually first toast all
  passed-in rows and then emit several *_multi_insert records; one for each
  page it fills with tuples.

  Add an XLOG_HEAP_LAST_MULTI_INSERT flag which is set in
  xl_heap_multi_insert->flag, denoting that this multi_insert record is the
  last emitted by one heap_multi_insert() call. Then use that flag in
  decode.c to only set clear_toast_afterwards in the right situation.

  Expand the number of rows inserted via COPY in the corresponding
  regression test to make sure that more than one heap page is filled with
  tuples by one heap_multi_insert() call.

  Backpatch to 9.4 like the previous commit.
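  The decode.c side of that reduces to roughly this test (a sketch of the
  condition, with i as the index of the tuple being decoded):

      /* only the final tuple of the final record may reset toast state */
      change->data.tp.clear_toast_afterwards =
          ((xlrec->flags & XLOG_HEAP_LAST_MULTI_INSERT) != 0 &&
           i == xlrec->ntuples - 1);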
* pg_upgrade: allow upgrades for new-only TOAST tables (Bruce Momjian, 2014-07-07)

  Previously, when calculations on the need for toast tables changed,
  pg_upgrade could not handle cases where the new cluster needed a TOAST
  table and the old cluster did not. (It already handled the opposite
  case.) This fixes the "OID mismatch" error typically generated in this
  case.

  Backpatch through 9.2
* Fix decoding of MULTI_INSERTs when rows other than the last are toasted. (Andres Freund, 2014-07-06)

  When decoding the results of a HEAP2_MULTI_INSERT (currently only
  generated by COPY FROM), toast columns for all but the last tuple weren't
  replaced by their actual contents before being handed to the output
  plugin. The reassembled toast datums were disregarded after every
  REORDER_BUFFER_CHANGE_(INSERT|UPDATE|DELETE), which is correct for plain
  inserts, updates and deletes, but not for multi-inserts: there we
  generate several REORDER_BUFFER_CHANGE_INSERTs for a single
  xl_heap_multi_insert record.

  To solve the problem, add a clear_toast_afterwards boolean to
  ReorderBufferChange's union member that's used by modifications. All row
  changes but multi_inserts always set that to true, but multi_insert sets
  it only for the last change generated.

  Add a regression test covering decoding of multi_inserts; there was none
  at all before.

  Backpatch to 9.4 where logical decoding was introduced.

  Bug found by Petr Jelinek.
* Consistently pass an "unsigned char" to ctype.h functions. (Noah Misch, 2014-07-06)

  The isxdigit() calls relied on undefined behavior. The isascii() call was
  well-defined, but our prevailing style is to include the cast.

  Back-patch to 9.4, where the isxdigit() calls were introduced.
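  The reason for the cast: plain char may be signed, and passing a negative
  value other than EOF to a ctype.h function is undefined behavior. A
  minimal illustration:

      #include <ctype.h>
      #include <stdbool.h>

      static bool
      is_hex_string(const char *s)
      {
          for (; *s; s++)
          {
              /* cast through unsigned char before calling isxdigit() */
              if (!isxdigit((unsigned char) *s))
                  return false;
          }
          return true;
      }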
* pg_upgrade: preserve database and relation minmxid values (Bruce Momjian, 2014-07-02)

  Also set these values for pre-9.3 old clusters that don't have values to
  preserve.

  Analysis by Alvaro

  Backpatch through 9.3
* pg_upgrade: no need to remove "members" files for pre-9.3 upgrades (Bruce Momjian, 2014-07-02)

  Per analysis by Alvaro

  Backpatch through 9.3
* Fix inadequately-sized output buffer in contrib/unaccent. (Tom Lane, 2014-07-01)

  The output buffer size in unaccent_lexize() was calculated as input
  string length times pg_database_encoding_max_length(), which effectively
  assumes that replacement strings aren't more than one character. While
  that was all that we previously documented it to support, the code
  actually has always allowed replacement strings of arbitrary length; so
  if you tried to make use of longer strings, you were at risk of buffer
  overrun. To fix, use an expansible StringInfo buffer instead of trying to
  determine the maximum space needed a priori.

  This would be a security issue if unaccent rules files could be installed
  by unprivileged users; but fortunately they can't, so in the back
  branches the problem can be labeled as improper configuration by a
  superuser. Nonetheless, a memory stomp isn't a nice way of reacting to
  improper configuration, so let's back-patch the fix.
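  A sketch of the StringInfo-based approach (next_replacement is a
  hypothetical helper standing in for the rules lookup):

      StringInfoData buf;
      const char *repl;
      int         repl_len;

      initStringInfo(&buf);
      while ((repl = next_replacement(&srcptr, &repl_len)) != NULL)
      {
          /* appendBinaryStringInfo repallocs as needed: no overrun */
          appendBinaryStringInfo(&buf, repl, repl_len);
      }
      /* buf.data / buf.len now hold the rewritten lexeme */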
* pg_upgrade: update C comments about pg_dumpall (Bruce Momjian, 2014-06-30)

  There were some C comments that hadn't been updated from the switch of
  using only pg_dumpall to using pg_dump and pg_dumpall, so update them.
  Also, don't bother using --schema-only for pg_dumpall --globals-only.

  Backpatch through 9.4
* Don't prematurely free the BufferAccessStrategy in pgstat_heap(). (Noah Misch, 2014-06-30)

  This function continued to use it after heap_endscan() freed it. In
  passing, don't explicitly create a strategy here. Instead, use the one
  created by heap_beginscan_strat(), if any.

  Back-patch to 9.2, where use of a BufferAccessStrategy here was
  introduced.
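  The corrected lifetime, sketched: the scan owns the strategy, so the
  strategy must not be freed while the scan is live (heap_beginscan_strat
  and heap_endscan are the real 9.x APIs):

      HeapScanDesc scan = heap_beginscan_strat(rel, SnapshotAny,
                                               0, NULL,   /* no scan keys */
                                               true,      /* allow_strat */
                                               false);    /* allow_sync */

      /* ... walk the heap via heap_getnext(scan, ...) ... */

      heap_endscan(scan);   /* releases the scan's strategy, too */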
* pg_upgrade: remove pg_multixact files left by initdb (Bruce Momjian, 2014-06-24)

  This fixes a bug that caused vacuum to fail when the '0000' files left by
  initdb were accessed as part of vacuum's cleanup of old pg_multixact
  files.

  Backpatch through 9.3
* Clean up data conversion short-lived memory context. (Joe Conway, 2014-06-20)

  dblink uses a short-lived data conversion memory context. However it was
  not deleted when no longer needed, leading to a noticeable memory leak
  under some circumstances. Plug the hole, along with minor refactoring.

  Backpatch to 9.2 where the leak was introduced.

  Report and initial patch by MauMau. Reviewed/modified slightly by Tom
  Lane and me.
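  The general pattern of such a fix, sketched with the 9.x memory-context
  API (the context name is illustrative):

      MemoryContext conv_cxt =
          AllocSetContextCreate(CurrentMemoryContext,
                                "dblink temporary context",
                                ALLOCSET_DEFAULT_MINSIZE,
                                ALLOCSET_DEFAULT_INITSIZE,
                                ALLOCSET_DEFAULT_MAXSIZE);

      /* ... per-value conversions allocate in conv_cxt ... */

      MemoryContextDelete(conv_cxt);   /* the missing cleanup */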
* Secure Unix-domain sockets of "make check" temporary clusters. (Noah Misch, 2014-06-14)

  Any OS user able to access the socket can connect as the bootstrap
  superuser and proceed to execute arbitrary code as the OS user running
  the test. Protect against that by placing the socket in a temporary,
  mode-0700 subdirectory of /tmp. The pg_regress-based test suites and the
  pg_upgrade test suite were vulnerable; the $(prove_check)-based test
  suites were already secure.

  Back-patch to 8.4 (all supported versions). The hazard remains wherever
  the temporary cluster accepts TCP connections, notably on Windows.

  As a convenient side effect, this lets testing proceed smoothly in builds
  that override DEFAULT_PGSOCKET_DIR. Popular non-default values like
  /var/run/postgresql are often unwritable to the build user.

  Security: CVE-2014-0067
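  On POSIX systems the core of such a scheme is small; a sketch (error
  handling and environment plumbing are illustrative):

      #include <err.h>
      #include <stdlib.h>

      char  template[] = "/tmp/pg_regress-XXXXXX";
      char *sockdir = mkdtemp(template);   /* directory created mode 0700 */

      if (sockdir == NULL)
          err(1, "could not create socket directory");
      /* point the temporary cluster and its clients at the private dir */
      setenv("PGHOST", sockdir, 1);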
* Save pg_stat_statements statistics file into $PGDATA/pg_stat directory at shutdown. (Fujii Masao, 2014-06-04)

  187492b6c2e8cafc5b39063ca3b67846e8155d24 changed pgstat.c so that the
  stats files were saved into the $PGDATA/pg_stat directory when the server
  was shut down. But it accidentally forgot to change the location of the
  pg_stat_statements permanent stats file. This commit fixes
  pg_stat_statements so that its stats file is also saved into
  $PGDATA/pg_stat at shutdown.

  Since this fix changes the file layout, we don't back-patch it to 9.3
  where this oversight was introduced.
* When using the OSSP UUID library, cache its uuid_t state object. (Tom Lane, 2014-05-29)

  The original coding in contrib/uuid-ossp created and destroyed a uuid_t
  object (or, in some cases, even two of them) each time it was called.
  This is not the intended usage: you're supposed to keep the uuid_t object
  around so that the library can cache its state across uses. (Other UUID
  libraries seem to keep equivalent state behind-the-scenes in static
  variables, but OSSP chose differently.) Aside from being quite
  inefficient, creating a new uuid_t loses knowledge of the previously
  generated UUID, which in theory could result in duplicate V1-style UUIDs
  being created on sufficiently fast machines.

  On at least some platforms, creating a new uuid_t also draws some entropy
  from /dev/urandom, leaving less for the rest of the system. This seems
  sufficiently unpleasant to justify back-patching this change.
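  The caching pattern, sketched against the OSSP API (uuid_create and
  uuid_make are real OSSP calls; error checks omitted for brevity):

      static uuid_t *cached_uuid = NULL;   /* lives for the backend's lifetime */

      if (cached_uuid == NULL)
          uuid_create(&cached_uuid);       /* create once, not per call */

      /* library state (last timestamp, clock sequence) carries over */
      uuid_make(cached_uuid, UUID_MAKE_V1);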
* Fix uuid-ossp regression tests based on buildfarm feedback. (Tom Lane, 2014-05-28)

  The previous version of these tests expected uuid_generate_v1() to always
  emit MAC addresses with the local-admin and multicast address bits zero.
  However, several of the buildfarm critters are reporting values with the
  local-admin bit set. (Perhaps they're running inside VMs or jails.) And a
  couple are reporting values with the multicast bit set, probably meaning
  that the UUID library couldn't read the system MAC address.

  Also, it emerges that if OSSP UUID can't read the system MAC address, it
  falls back to V1MC behavior wherein the whole node field gets randomized
  each time, breaking the test that expected the node field to remain
  stable in V1 output. (It looks like e2fs doesn't behave that way,
  though.)

  It's not entirely clear why we can't get a system MAC address, since the
  buildfarm scripts would not work without internet access. Nonetheless,
  the regression tests had better cope with the case, so adjust the tests
  to expect these behaviors.
* Revert "Fix bogus %name-prefix option syntax in all our Bison files."Tom Lane2014-05-28
| | | | | | | | | | | | This reverts commit 45b7abe59e9485657ac9380f35d2d917dd0da25b. It turns out that the %name-prefix syntax without "=" does not work at all in pre-2.4 Bison. We are not prepared to make such a large jump in minimum required Bison version just to suppress a warning message in a version hardly any developers are using yet. When 3.0 gets more popular, we'll figure out a way to deal with this. In the meantime, BISONFLAGS=-Wno-deprecated is recommendable for anyone using 3.0 who doesn't want to see the warning.