path: root/contrib
...
* Fix pg_file_write() error handling. (Noah Misch, 2017-03-12)

  Detect fclose() failures; given "ln -s /dev/full $PGDATA/devfull",
  "pg_file_write('devfull', 'x', true)" now fails as it should. Don't leak
  a stream when fwrite() fails. Remove a born-ineffective test that aimed
  to skip zero-length writes.

  Back-patch to 9.2 (all supported versions).
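  A minimal sketch of the fixed pattern in plain C (a hypothetical helper,
  not the actual contrib/adminpack code): an fwrite() failure must not leak
  the stream, and fclose() itself flushes buffered data, so its result must
  be checked too.

      #include <errno.h>
      #include <stdio.h>

      /* Hypothetical helper illustrating the fixed error handling. */
      static int
      write_all_or_fail(const char *path, const char *data, size_t len)
      {
          FILE   *f = fopen(path, "wb");

          if (f == NULL)
              return -1;
          if (fwrite(data, 1, len, f) != len)
          {
              int         save_errno = errno;

              fclose(f);          /* don't leak the stream on write failure */
              errno = save_errno;
              return -1;
          }
          if (fclose(f) != 0)     /* fclose() flushes, so it can fail too */
              return -1;
          return 0;
      }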
* Fix ancient connection leak in dblink (Joe Conway, 2017-03-11)

  When using unnamed connections with dblink, every time a new connection
  is made, the old one is leaked. Fix that. This has been an issue probably
  since dblink was first committed. Someone complained almost ten years
  ago, but apparently I decided not to pursue it at the time, and neither
  did anyone else, so it slipped between the cracks. Now that someone else
  has complained, fix in all supported branches.

  Discussion: (orig) https://postgr.es/m/flat/F680AB59-6D6F-4026-9599-1BE28880273D%40decibel.org#F680AB59-6D6F-4026-9599-1BE28880273D@decibel.org
  Discussion: (new) https://postgr.es/m/flat/0A3221C70F24FB45833433255569204D1F6ADF8C@G01JPEXMBYT05
  Reported by: Jim Nasby and Takayuki Tsunakawa
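  The shape of the leak and of the fix, sketched with plain libpq (names
  are illustrative, not dblink's actual variables):

      #include <libpq-fe.h>

      /* dblink keeps one implicit "unnamed" connection between calls. */
      static PGconn *unnamed_conn = NULL;

      static void
      set_unnamed_connection(const char *conninfo)
      {
          PGconn     *newconn = PQconnectdb(conninfo);

          /* The fix: release any previous unnamed connection before
           * overwriting the pointer, instead of silently dropping it. */
          if (unnamed_conn != NULL)
              PQfinish(unnamed_conn);
          unnamed_conn = newconn;
      }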
* Fix contrib/pg_trgm's extraction of trigrams from regular expressions. (Tom Lane, 2017-02-22)

  The logic for removing excess trigrams from the result was faulty. It
  intends to avoid merging the initial and final states of the NFA, which
  is necessary, but in testing whether removal of a specific trigram would
  cause that, it failed to consider the combined effects of all the state
  merges that that trigram's removal would cause. This could result in a
  broken final graph that would never match anything, leading to GIN or
  GiST indexscans not finding anything.

  To fix, add a "tentParent" field that is used only within this loop, and
  set it to show state merges that we are tentatively going to do. While
  examining a particular arc, we must chase up through tentParent links as
  well as regular parent links (the former can only appear atop the
  latter), and we must account for state init/fin flag merges that haven't
  actually been done yet. To simplify the latter, combine the separate
  init and fin bool fields into a bitmap flags field.

  I also chose to get rid of the "children" state list, which seems
  entirely inessential.

  Per bug #14563 from Alexey Isayko, which the added test cases are based
  on. Back-patch to 9.3 where this code was added.

  Report: https://postgr.es/m/20170222111446.1256.67547@wrigleys.postgresql.org
  Discussion: https://postgr.es/m/8816.1487787594@sss.pgh.pa.us
* Fix typo also in expected output. (Heikki Linnakangas, 2017-02-06)

  Commit 181bdb90ba fixed the typo in the .sql file, but forgot to update
  the expected output.
* Fix typos in comments. (Heikki Linnakangas, 2017-02-06)

  Backpatch to all supported versions, where applicable, to make
  backpatching of future fixes go more smoothly.

  Josh Soref

  Discussion: https://www.postgresql.org/message-id/CACZqfqCf+5qRztLPgmmosr-B0Ye4srWzzw_mo4c_8_B_mtjmJQ@mail.gmail.com
* Invalidate cached plans on FDW option changes. (Tom Lane, 2017-01-06)

  This fixes problems where a plan must change but fails to do so, as seen
  in a bug report from Rajkumar Raghuwanshi.

  For ALTER FOREIGN TABLE OPTIONS, do this through the standard method of
  forcing a relcache flush on the table. For ALTER FOREIGN DATA WRAPPER
  and ALTER SERVER, just flush the whole plan cache on any change in
  pg_foreign_data_wrapper or pg_foreign_server. That matches the way we
  handle some other low-probability cases such as opclass changes, and
  it's unclear that the case arises often enough to be worth working
  harder. Besides, that gives a patch that is simple enough to back-patch
  with confidence.

  Back-patch to 9.3. In principle we could apply the code change to 9.2 as
  well, but (a) we lack postgres_fdw to test it with, (b) it's doubtful
  that anyone is doing anything exciting enough with FDWs that far back to
  need this desperately, and (c) the patch doesn't apply cleanly.

  Patch originally by Amit Langote, reviewed by Etsuro Fujita and Ashutosh
  Bapat, who each contributed substantial changes as well.

  Discussion: https://postgr.es/m/CAKcux6m5cA6rRPTKkqVdJ-R=KKDfe35Q_ZuUqxDSV_4hwga=og@mail.gmail.com
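  The ALTER FOREIGN DATA WRAPPER / ALTER SERVER half of the fix amounts to
  two callback registrations, roughly like the following (a sketch of what
  happens inside plancache.c's initialization; see the real code for
  details):

      /* Any change in these catalogs invalidates every cached plan. */
      CacheRegisterSyscacheCallback(FOREIGNDATAWRAPPEROID,
                                    PlanCacheSysCallback, (Datum) 0);
      CacheRegisterSyscacheCallback(FOREIGNSERVEROID,
                                    PlanCacheSysCallback, (Datum) 0);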
* Make dblink try harder to form useful error messages (Joe Conway, 2016-12-22)

  When libpq encounters a connection-level error, e.g. runs out of memory
  while forming a result, there will be no error associated with PGresult,
  but a message will be placed into PGconn's error buffer. postgres_fdw
  takes care to use the PGconn error message when PGresult does not have
  one, but dblink has been negligent in that regard. Modify dblink to
  mirror what postgres_fdw has been doing.

  Back-patch to all supported branches.

  Author: Joe Conway
  Reviewed-By: Tom Lane
  Discussion: https://postgr.es/m/02fa2d90-2efd-00bc-fefc-c23c00eb671e%40joeconway.com
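  The fallback pattern, sketched against the public libpq API (a
  hypothetical helper; dblink's real code feeds the result through its own
  error-reporting path):

      #include <libpq-fe.h>

      static const char *
      best_error_message(PGconn *conn, PGresult *res)
      {
          const char *msg = res ? PQresultErrorMessage(res) : NULL;

          if (msg == NULL || msg[0] == '\0')
              msg = PQerrorMessage(conn);   /* connection-level error buffer */
          return msg;
      }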
* Protect dblink from invalid options when using postgres_fdw server (Joe Conway, 2016-12-22)

  When dblink uses a postgres_fdw server name for its connection, it is
  possible for the connection to have options that are invalid with dblink
  (e.g. "updatable"). The recommended way to avoid this problem is to use
  dblink_fdw servers instead. However there are use cases for using
  postgres_fdw, and possibly other FDWs, for dblink connection options;
  therefore protect against trying to use any options that do not apply by
  using is_valid_dblink_option() when building the connection string from
  the options.

  Back-patch to 9.3. Although 9.2 supports FDWs for connection info,
  is_valid_dblink_option() did not yet exist, and neither did
  postgres_fdw, at least in the postgres source tree. Given the lack of
  previous complaints, fixing that seems too invasive/not worth it.

  Author: Corey Huinker
  Reviewed-By: Joe Conway
  Discussion: https://postgr.es/m/CADkLM%3DfWyXVEyYcqbcRnxcHutkP45UHU9WD7XpdZaMfe7S%3DRwA%40mail.gmail.com
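  Roughly the loop shape involved, heavily simplified (the helper's exact
  signature is an assumption; see dblink.c for the real
  is_valid_dblink_option()):

      /* Keep only options libpq understands when borrowing a foreign
       * server's option list for a dblink connection string. */
      foreach(cell, foreign_server_options)
      {
          DefElem    *def = (DefElem *) lfirst(cell);

          if (is_valid_dblink_option(libpq_options, def->defname, context))
              appendStringInfo(&buf, "%s='%s' ",
                               def->defname, defGetString(def));
          /* else: skip FDW-only options such as "updatable" */
      }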
* Give a useful error message if uuid-ossp is built without preconfiguration. (Tom Lane, 2016-12-22)

  Before commit b8cc8f947, it was possible to build contrib/uuid-ossp
  without having told configure you meant to; you could just cd into that
  directory and "make". That no longer works because the code depends on
  configure to have done header and library probes, but the ensuing error
  messages are not so easy to interpret if you're not an old C hand. We've
  gotten a couple of complaints recently from people trying to do this the
  low-tech way, so add an explicit #error directing the user to use
  --with-uuid.

  (In principle we might want to do something similar in the other
  optionally-built contrib modules; but I don't think any of the others
  have ever worked without preconfiguration, so there are no bad habits to
  break people of.)

  Back-patch to 9.4 where the previous commit came in.

  Report: https://postgr.es/m/CAHeEsBf42AWTnk=1qJvFv+mYgRFm07Knsfuc86Ono8nRjf3tvQ@mail.gmail.com
  Report: https://postgr.es/m/CAKYdkBrUaZX+F6KpmzoHqMtiUqCtAW_w6Dgvr6F0WTiopuGxow@mail.gmail.com
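  The guard comes down to a few preprocessor lines along these lines (the
  HAVE_UUID_* macro names follow configure's --with-uuid conventions; the
  exact wording in uuid-ossp.c may differ):

      #if !defined(HAVE_UUID_OSSP) && !defined(HAVE_UUID_E2FS) && \
          !defined(HAVE_UUID_BSD)
      #error "please use configure's --with-uuid switch to select a UUID library"
      #endif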
* Improve dblink error message when remote does not provide it (Joe Conway, 2016-12-21)

  When dblink or postgres_fdw detects an error on the remote side of the
  connection, it will try to construct a local error message as best it
  can using libpq's PQresultErrorField(). When no primary message is
  available, it was bailing out with an unhelpful "unknown error". Make
  that message better and more style guide compliant. Per discussion on
  hackers.

  Backpatch to 9.2 except postgres_fdw which didn't exist before 9.3.

  Discussion: https://postgr.es/m/19872.1482338965%40sss.pgh.pa.us
* In contrib/uuid-ossp, #include headers needed for ntohl() and ntohs(). (Tom Lane, 2016-12-17)

  Oversight in commit b8cc8f947. I just noticed this causes compiler
  warnings on FreeBSD, and it really ought to cause warnings elsewhere
  too: all references I can find say that <arpa/inet.h> is required for
  these. We have a lot of code elsewhere that thinks that both
  <netinet/in.h> and <arpa/inet.h> should be included for these functions,
  so do it that way here too, even though <arpa/inet.h> ought to be
  sufficient according to the references I consulted.

  Back-patch to 9.4 where the previous commit landed.
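  The portable include recipe, as a self-contained example:

      /* POSIX declares ntohl()/ntohs() in <arpa/inet.h>, but plenty of
       * code and platforms expect <netinet/in.h> as well, so use both. */
      #include <netinet/in.h>
      #include <arpa/inet.h>

      unsigned int
      to_host_order(unsigned int net_value)
      {
          return ntohl(net_value);    /* network (big-endian) -> host order */
      }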
* Make contrib/test_decoding regression tests safe for CZ locale. (Tom Lane, 2016-11-21)

  A little COLLATE "C" goes a long way.

  Pavel Stehule, per suggestion from Craig Ringer

  Discussion: <CAFj8pRA8nJZcozgxN=RMSqMmKuHVOkcGAAKPKdFeiMWGDSUDLA@mail.gmail.com>
* Back-patch portability fixes for contrib/pageinspect/ginfuncs.c. (Tom Lane, 2016-11-04)

  Back-patch commits 84ad68d64 and 367b99bbb.
* Fix portability bug in gin_page_opaque_info(). (Tom Lane, 2016-11-02)

  Somebody apparently thought that "if Int32GetDatum is good,
  Int64GetDatum must be better". Per buildfarm failures now that Peter has
  added some regression tests here.
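  The bug class in one line: the Datum constructor must match the column's
  declared SQL type. A sketch (variable names illustrative):

      /* The column is declared int4, so wrap the value as int32;
       * Int64GetDatum() here reads back as garbage where Datum is 32 bits. */
      values[0] = Int32GetDatum(pending_count);   /* not Int64GetDatum() */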
* Correct logical decoding restore behaviour for subtransactions. (Andres Freund, 2016-10-03)

  Before initializing iteration over a subtransaction's changes, the last
  few changes were not spilled to disk. That's correct if the transaction
  didn't spill to disk, but otherwise... This bug can lead to missed or
  misordered subtransaction contents when they were spilled to disk.

  Move spilling of the remaining in-memory changes to
  ReorderBufferIterTXNInit(), where it can easily be applied to the top
  transaction and, if present, subtransactions.

  Since this code had too many bugs already, noticeably increase test
  coverage.

  Fixes: #14319
  Reported-By: Huan Ruan
  Discussion: <20160909012610.20024.58169@wrigleys.postgresql.org>
  Backpatch: 9.4-, where logical decoding was added
* Fix building with LibreSSL. (Heikki Linnakangas, 2016-09-15)

  LibreSSL defines OPENSSL_VERSION_NUMBER to claim that it is version
  2.0.0, but it doesn't have the functions added in OpenSSL 1.1.0. Add
  autoconf checks for the individual functions we need, and stop relying
  on OPENSSL_VERSION_NUMBER.

  Backport to 9.5 and 9.6, like the patch that broke this. In the
  back-branches, there are still a few OPENSSL_VERSION_NUMBER checks left,
  to check for OpenSSL 0.9.8 or 0.9.7. I left them as they were - LibreSSL
  has all those functions, so they work as intended.

  Per buildfarm member curculio.

  Discussion: <2442.1473957669@sss.pgh.pa.us>
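  The resulting pattern replaces version tests with feature tests
  (HAVE_BIO_GET_DATA stands in for whichever per-function configure probes
  were added):

      /* LibreSSL claims OPENSSL_VERSION_NUMBER 2.0.0 but lacks the 1.1.0
       * accessors, so probe each function instead of the version. */
      #ifndef HAVE_BIO_GET_DATA
      #define BIO_get_data(bio) ((bio)->ptr)
      #define BIO_set_data(bio, data) ((bio)->ptr = (data))
      #endif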
* pg_buffercache: Allow huge allocations. (Robert Haas, 2016-09-15)

  Otherwise, users who have configured shared_buffers >= 256GB won't be
  able to use this module. There probably aren't many of those, but it
  doesn't hurt anything to fix it so that it works.

  Backpatch to 9.4, where MemoryContextAllocHuge was introduced. The same
  problem exists in older branches, but there's no easy way to fix it
  there.

  KaiGai Kohei
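  The arithmetic: with 8kB pages, shared_buffers = 256GB means NBuffers is
  about 32 million, and that many per-buffer records is on the order of
  palloc()'s ~1GB cap. The changed allocation is roughly (sketch):

      fctx->record = (BufferCachePagesRec *)
          MemoryContextAllocHuge(CurrentMemoryContext,
                                 sizeof(BufferCachePagesRec) * (Size) NBuffers);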
* Support OpenSSL 1.1.0. (Heikki Linnakangas, 2016-09-15)

  Changes needed to build at all:

  - Check for SSL_new in configure, now that SSL_library_init is a macro.
  - Do not access struct members directly. This includes some new code in
    pgcrypto, to use the resource owner mechanism to ensure that we don't
    leak OpenSSL handles, now that we can't embed them in other structs
    anymore.
  - RAND_SSLeay() -> RAND_OpenSSL()

  Changes that were needed to silence deprecation warnings, but were not
  strictly necessary:

  - RAND_pseudo_bytes() -> RAND_bytes().
  - SSL_library_init() and OpenSSL_config() -> OPENSSL_init_ssl()
  - ASN1_STRING_data() -> ASN1_STRING_get0_data()
  - DH_generate_parameters() -> DH_generate_parameters_ex()
  - Locking callbacks are not needed with OpenSSL 1.1.0 anymore. (Good
    riddance!)

  Also replace references to SSLEAY_VERSION_NUMBER with
  OPENSSL_VERSION_NUMBER, for the sake of consistency.
  OPENSSL_VERSION_NUMBER has existed since time immemorial.

  Fix SSL test suite to work with OpenSSL 1.1.0. CA certificates must have
  the "CA:true" basic constraint extension now, or OpenSSL will refuse
  them. Regenerate the test certificates with that. The "openssl" binary,
  used to generate the certificates, is also now more picky, and throws an
  error if an X509 extension is specified in "req_extensions", but that
  section is empty.

  Backpatch to 9.5 and 9.6, per popular demand. The file structure was
  somewhat different in earlier branches, so I didn't bother to go further
  than that. In back-branches, we still support OpenSSL 0.9.7 and above.
  OpenSSL 0.9.6 should still work too, but I didn't test it. In master, we
  only support 0.9.8 and above.

  Patch by Andreas Karlsson, with additional changes by me.

  Discussion: <20160627151604.GD1051@msg.df7cb.de>
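  A minimal illustration of the "do not access struct members directly"
  item (the field and accessor are real OpenSSL names; the wrapper
  function is hypothetical):

      #include <openssl/ssl.h>

      static int
      negotiated_version(SSL *ssl)
      {
      #if OPENSSL_VERSION_NUMBER >= 0x10100000L
          return SSL_version(ssl);    /* 1.1.0: struct ssl_st is opaque */
      #else
          return ssl->version;        /* pre-1.1.0: direct member access */
      #endif
      }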
* Fix -e option in contrib/intarray/bench/bench.pl. (Tom Lane, 2016-08-17)

  As implemented, -e ran an EXPLAIN but then discarded the output, which
  certainly seems pointless. Make it print to stdout instead. It's been
  like that forever, so back-patch to all supported branches.

  Daniel Gustafsson, reviewed by Andreas Scherbaum

  Patch: <B97BDCB7-A3B3-4734-90B5-EDD586941629@yesql.se>
* Fix typo (Peter Eisentraut, 2016-08-09)
* Don't propagate a null subtransaction snapshot up to parent transaction. (Tom Lane, 2016-08-07)

  This oversight could cause logical decoding to fail to decode an outer
  transaction containing changes, if a subtransaction had an XID but no
  actual changes. Per bug #14279 from Marko Tiikkaja. Patch by Marko based
  on analysis by Andrew Gierth.

  Discussion: <20160804191757.1430.39011@wrigleys.postgresql.org>
* Fix assorted fallout from IS [NOT] NULL patch. (Tom Lane, 2016-07-28)

  Commits 4452000f3 et al established semantics for NullTest.argisrow that
  are a bit different from its initial conception: rather than being
  merely a cache of whether we've determined the input to have composite
  type, the flag now has the further meaning that we should apply
  field-by-field testing as per the standard's definition of IS [NOT]
  NULL. If argisrow is false and yet the input has composite type, the
  construct instead has the semantics of IS [NOT] DISTINCT FROM NULL.
  Update the comments in primnodes.h to clarify this, and fix ruleutils.c
  and deparse.c to print such cases correctly. In the case of ruleutils.c,
  this merely results in cosmetic changes in EXPLAIN output, since the
  case can't currently arise in stored rules. However, it represents a
  live bug for deparse.c, which would formerly have sent a remote query
  that had semantics different from the local behavior. (From the user's
  standpoint, this means that testing a remote nested-composite column for
  null-ness could have had unexpected recursive behavior much like that
  fixed in 4452000f3.)

  In a related but somewhat independent fix, make plancat.c set argisrow
  to false in all NullTest expressions constructed to represent
  "attnotnull" constructs. Since attnotnull is actually enforced as a
  simple null-value check, this is a more accurate representation of the
  semantics; we were previously overpromising what it meant for composite
  columns, which might possibly lead to incorrect planner optimizations.
  (It seems that what the SQL spec expects a NOT NULL constraint to mean
  is an IS NOT NULL test, so arguably we are violating the spec and should
  fix attnotnull to do the other thing. If we ever do, this part should
  get reverted.)

  Back-patch, same as the previous commit.

  Discussion: <10682.1469566308@sss.pgh.pa.us>
* Make contrib regression tests safe for Danish locale. (Tom Lane, 2016-07-21)

  In btree_gin and citext, avoid some not-particularly-interesting
  dependencies on the sorting of 'aa'. In tsearch2, use COLLATE "C" to
  remove an uninteresting dependency on locale sort order (and thereby
  allow removal of a variant expected-file).

  Also, in citext, avoid assuming that lower('I') = 'i'. This isn't
  relevant to Danish but it does fail in Turkish.
* Use correct symbol for minimum int64 value (Peter Eisentraut, 2016-07-17)

  The old code used SEQ_MINVALUE to get the smallest int64 value. This was
  done as a convenience to avoid having to deal with INT64_IS_BUSTED, but
  that is obsolete now. Also, it is incorrect because the smallest int64
  value is actually SEQ_MINVALUE-1.

  Fix by using PG_INT64_MIN.
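  The off-by-one, spelled out in plain C (stdint names used for
  self-containment; PostgreSQL spells these PG_INT64_MIN/PG_INT64_MAX):

      #include <stdint.h>

      /* Sequence bounds are symmetric; the int64 range is not. */
      #define SEQ_MAXVALUE_SKETCH  INT64_MAX              /* 9223372036854775807 */
      #define SEQ_MINVALUE_SKETCH  (-SEQ_MAXVALUE_SKETCH) /* INT64_MIN + 1 */

      /* Hence the smallest int64 is SEQ_MINVALUE - 1, i.e. INT64_MIN. */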
* Fix handling of multixacts predating pg_upgrade (Alvaro Herrera, 2016-06-24)

  After pg_upgrade, it is possible that some tuples' Xmax have multixacts
  corresponding to the old installation; such multixacts cannot have
  running members anymore. In many code sites we already know not to read
  them and clobber them silently, but at least when VACUUM tries to freeze
  a multixact or determine whether one needs freezing, there's an attempt
  to resolve it to its member transactions by calling
  GetMultiXactIdMembers, and if the multixact value is "in the future"
  with regards to the current valid multixact range, an error like this is
  raised:

      ERROR:  MultiXactId 123 has not been created yet -- apparent wraparound

  and vacuuming fails. Per discussion with Andrew Gierth, it is completely
  bogus to try to resolve multixacts coming from before a pg_upgrade,
  regardless of where they stand with regards to the current valid
  multixact range.

  It's possible to get from under this problem by doing SELECT FOR UPDATE
  of the problem tuples, but if tables are large, this is slow and
  tedious, so a more thorough solution is desirable.

  To fix, we realize that multixacts in xmax created in 9.2 and previous
  have a specific bit pattern that is never used in 9.3 and later (we
  already knew this, per comments and infomask tests sprinkled in various
  places, but we weren't leveraging this knowledge appropriately).
  Whenever the infomask of the tuple matches that bit pattern, we just
  ignore the multixact completely as if Xmax wasn't set; or, in the case
  of tuple freezing, we act as if an unwanted value is set and clobber it
  without decoding. This guarantees that no errors will be raised, and
  that the values will be progressively removed until all tables are
  clean. Most callers of GetMultiXactIdMembers are patched to recognize
  directly that the value is a removable "empty" multixact and avoid
  calling GetMultiXactIdMembers altogether.

  To avoid changing the signature of GetMultiXactIdMembers() in back
  branches, we keep the "allow_old" boolean flag but rename it to
  "from_pgupgrade"; if the flag is true, we always return an empty set
  instead of looking up the multixact. (I suppose we could remove the
  argument in the master branch, but I chose not to do so in this commit.)

  This was broken all along, but the error-facing message appeared first
  because of commit 8e9a16ab8f7f and was partially fixed in a25c2b7c4db3.
  This fix, backpatched all the way back to 9.3, goes approximately in the
  same direction as a25c2b7c4db3 but should cover all cases.

  Bug analysis by Andrew Gierth and Álvaro Herrera.

  A number of public reports match this bug:
  https://www.postgresql.org/message-id/20140330040029.GY4582@tamriel.snowman.net
  https://www.postgresql.org/message-id/538F3D70.6080902@publicrelay.com
  https://www.postgresql.org/message-id/556439CF.7070109@pscs.co.uk
  https://www.postgresql.org/message-id/SG2PR06MB0760098A111C88E31BD4D96FB3540@SG2PR06MB0760.apcprd06.prod.outlook.com
  https://www.postgresql.org/message-id/20160615203829.5798.4594@wrigleys.postgresql.org
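  The detection boils down to an infomask test along these lines (a sketch
  modeled on the HEAP_LOCKED_UPGRADED test this work introduced;
  htup_details.h has the authoritative version):

      /* A pre-9.3 ("upgraded") multixact can only have share-locked the
       * tuple, so this bit combination never occurs for 9.3+ multis. */
      #define LOCKED_UPGRADED_SKETCH(infomask) \
          (((infomask) & HEAP_XMAX_IS_MULTI) != 0 && \
           ((infomask) & HEAP_XMAX_LOCK_ONLY) != 0 && \
           ((infomask) & (HEAP_XMAX_EXCL_LOCK | HEAP_XMAX_KEYSHR_LOCK)) == 0)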
* Properly initialize SortSupport for ORDER BY rechecks in nodeIndexscan.c. (Tom Lane, 2016-06-05)

  Fix still another bug in commit 35fcb1b3d: it failed to fully initialize
  the SortSupport states it introduced to allow the executor to re-check
  ORDER BY expressions containing distance operators. That led to a null
  pointer dereference if the sortsupport code tried to use ssup_cxt. The
  problem only manifests in narrow cases, explaining the lack of previous
  field reports. It requires a GiST-indexable distance operator that lacks
  SortSupport and is on a pass-by-ref data type, which among core+contrib
  seems to be only btree_gist's interval opclass; and it requires the scan
  to be done as an IndexScan not an IndexOnlyScan, which explains how
  btree_gist's regression test didn't catch it.

  Per bug #14134 from Jihyun Yu.

  Peter Geoghegan

  Report: <20160511154904.2603.43889@wrigleys.postgresql.org>
* Ensure plan stability in contrib/btree_gist regression test. (Tom Lane, 2016-05-12)

  Buildfarm member skink failed with symptoms suggesting that an
  auto-analyze had happened and changed the plan displayed for a test
  query. Although this is evidently of low probability, regression tests
  that sometimes fail are no fun, so add commands to force a bitmap scan
  to be chosen.
* Remove unused macros. (Heikki Linnakangas, 2016-05-02)

  CHECK_PAGE_OFFSET_RANGE() has been unused forever.
  CHECK_RELATION_BLOCK_RANGE() has been unused in pgstatindex.c ever since
  bt_page_stats() and bt_page_items() functions were moved from
  pgstattuple to pageinspect module. It still exists in
  pageinspect/btreefuncs.c.

  Daniel Gustafsson
* Revert "Convert contrib/seg's bool-returning SQL functions to V1 call ↵Tom Lane2016-04-28
| | | | | | | | convention." This reverts commit b1dd2f86ce7d43f23f6aae307bb22de826849e7d. That turns out to have been based on a faulty diagnosis of why the VS2015 build was misbehaving. Instead, we need to fix DatumGetBool().
* Convert contrib/seg's bool-returning SQL functions to V1 call convention. (Tom Lane, 2016-04-22)

  It appears that we can no longer get away with using V0 call convention
  for bool-returning functions in newer versions of MSVC. The compiler
  seems to generate code that doesn't clear the higher-order bits of the
  result register, causing the bool result Datum to often read as "true"
  when "false" was intended. This is not very surprising, since the
  function thinks it's returning a bool-width result but fmgr_oldstyle
  assumes that V0 functions return "char *"; what's surprising is that
  that hack worked for so long on so many platforms.

  The only functions of this description in core+contrib are in
  contrib/seg, which we'd intentionally left mostly in V0 style to serve
  as a warning canary if V0 call convention breaks. We could imagine
  hacking things so that they're still V0 (we'd have to redeclare the
  bool-returning functions as returning some suitably wide integer type,
  like size_t, at the C level). But on the whole it seems better to
  convert 'em to V1. We can still leave the pointer- and int-returning
  functions in V0 style, so that the test coverage isn't gone entirely.

  Back-patch to 9.5, since our intention is to support VS2015 in 9.5 and
  later. There's no SQL-level change in the functions' behavior so
  back-patching should be safe enough.

  Discussion: <22094.1461273324@sss.pgh.pa.us>

  Michael Paquier, adjusted some by me
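  What the conversion looks like for one function, sketched (the
  comparison helper is hypothetical; contrib/seg's real functions differ
  in detail):

      #include "postgres.h"
      #include "fmgr.h"

      /* V1 convention: the bool travels inside a full-width Datum, so no
       * high-order register bits are left to chance as with V0's "char *". */
      PG_FUNCTION_INFO_V1(seg_lt);

      Datum
      seg_lt(PG_FUNCTION_ARGS)
      {
          SEG    *a = (SEG *) PG_GETARG_POINTER(0);
          SEG    *b = (SEG *) PG_GETARG_POINTER(1);

          PG_RETURN_BOOL(seg_cmp_internal(a, b) < 0); /* hypothetical helper */
      }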
* Fix core dump in ReorderBufferRestoreChange on alignment-picky platforms. (Tom Lane, 2016-04-14)

  When re-reading an update involving both an old tuple and a new tuple
  from disk, reorderbuffer.c was careless about whether the new tuple is
  suitably aligned for direct access --- in general, it isn't.

  We'd missed seeing this in the buildfarm because the
  contrib/test_decoding tests exercise this code path only a few times,
  and by chance all of those cases have old tuples with length a multiple
  of 4, which is usually enough to make the access to the new tuple's
  t_len safe. For some still-not-entirely-clear reason, however, Debian's
  sparc build gets a bus error, as reported by Christoph Berg; perhaps
  it's assuming 8-byte alignment of the pointer?

  The lack of previous field reports is probably because you need all of
  these conditions to trigger a crash: an alignment-picky platform (not
  Intel), a transaction large enough to spill to disk, an update within
  that xact that changes a primary-key field and has an odd-length old
  tuple, and of course logical decoding tracing the transaction.

  Avoid the alignment assumption by using memcpy instead of fetching t_len
  directly, and add a test case that exposes the crash on picky platforms.

  Back-patch to 9.4 where the bug was introduced.

  Discussion: <20160413094117.GC21485@msg.credativ.de>
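  The alignment-safe idiom used by the fix, in isolation:

      /* "data" points into a disk buffer at an arbitrary byte offset; on
       * alignment-picky hardware a direct uint32 dereference can bus-error. */
      uint32      t_len;

      memcpy(&t_len, data, sizeof(uint32));   /* safe at any alignment */
      /* rather than: t_len = *(uint32 *) data; */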
* Add missing checks to some of pageinspect's BRIN functions (Alvaro Herrera, 2016-03-28)

  brin_page_type() and brin_metapage_info() did not enforce being called
  by superuser, like other pageinspect functions that take bytea do. Since
  they don't verify the passed page thoroughly, it is possible to use them
  to read the server memory with a carefully crafted bytea value, up to a
  few kilobytes from where the input bytea is located.

  Have them throw errors if called by a non-superuser.

  Report and initial patch: Andreas Seltenreich

  Security: CVE-2016-3065
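  The guard added to each function follows the pattern used by the other
  pageinspect functions:

      if (!superuser())
          ereport(ERROR,
                  (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                   errmsg("must be superuser to use raw page functions")));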
* Fix phony .PHONY. (Tom Lane, 2016-03-19)

  A couple makefiles had misspelled the magic .PHONY target as PHONY.
* Avoid unlikely data-loss scenarios due to rename() without fsync. (Andres Freund, 2016-03-09)

  Renaming a file using rename(2) is not guaranteed to be durable in face
  of crashes. Use the previously added
  durable_rename()/durable_link_or_rename() in various places where we
  previously just renamed files.

  Most of the changed call sites are arguably not critical, but it seems
  better to err on the side of too much durability. The most prominent
  known case where the previously missing fsyncs could cause data loss is
  crashes at the end of a checkpoint. After the actual checkpoint has been
  performed, old WAL files are recycled. When they're filled, their
  contents are fdatasynced, but we did not fsync the containing directory.
  An OS/hardware crash in an unfortunate moment could then end up leaving
  that file with its old name, but new content; WAL replay would thus not
  replay it.

  Reported-By: Tomas Vondra
  Author: Michael Paquier, Tomas Vondra, Andres Freund
  Discussion: 56583BDD.9060302@2ndquadrant.com
  Backpatch: All supported branches
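  A minimal POSIX sketch of what the durable variant adds over bare
  rename(2) (error handling elided; PostgreSQL's real durable_rename() is
  more careful):

      #include <fcntl.h>
      #include <stdio.h>
      #include <unistd.h>

      static int
      durable_rename_sketch(const char *oldfile, const char *newfile,
                            const char *dirpath)
      {
          int         fd;

          fd = open(oldfile, O_RDWR);     /* 1. make the contents durable */
          fsync(fd);
          close(fd);

          if (rename(oldfile, newfile) != 0)
              return -1;

          fd = open(dirpath, O_RDONLY);   /* 2. fsync the directory, making */
          fsync(fd);                      /*    the new name itself durable */
          close(fd);
          return 0;
      }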
* ltree: Zero padding bytes when allocating memory for externally visible data. (Andres Freund, 2016-03-08)

  ltree/ltree_gist/ltxtquery's headers store data at MAXALIGN alignment,
  requiring some padding bytes. So far we left these uninitialized. Zero
  those by using palloc0.

  Author: Andres Freund
  Reported-By: Andres Freund / valgrind / buildfarm animal skink
  Backpatch: 9.1-
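  The change is essentially one allocator call (sketched; size expression
  illustrative):

      /* MAXALIGN'd headers leave padding bytes; palloc() leaves them as
       * garbage, which then leaks into externally visible datums.
       * palloc0() zeroes the whole allocation up front. */
      ltree      *result = (ltree *) palloc0(LTREE_HDRSIZE + datasize);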
* logical decoding: Fix handling of large old tuples with replica identity full. (Andres Freund, 2016-03-05)

  When decoding the old version of an UPDATE or DELETE change, and if that
  tuple was bigger than MaxHeapTupleSize, we either Assert'ed out, or
  failed in more subtle ways in non-assert builds. Normally individual
  tuples aren't bigger than MaxHeapTupleSize, with big datums toasted. But
  that's not the case for the old version of a tuple for logical decoding;
  the replica identity is logged as one piece. With the default replica
  identity, btree limits that to small tuples, but that's not the case for
  FULL.

  Change the tuple buffer infrastructure to separately allocate over-large
  tuples, instead of always going through the slab cache.

  This unfortunately requires changing the ReorderBufferTupleBuf
  definition; we need to store the allocated size someplace. To avoid
  requiring output plugins to recompile, don't store HeapTupleHeaderData
  directly after HeapTupleData, but point to it via t_data; that leaves
  room for the allocated size. As there's no reason for an output plugin
  to look at ReorderBufferTupleBuf->t_data.header, remove the field. It
  was just a minor convenience having it directly accessible.

  Reported-By: Adam Dratwiński
  Discussion: CAKg6ypLd7773AOX4DiOGRwQk1TVOQKhNwjYiVjJnpq8Wo+i62Q@mail.gmail.com
* logical decoding: old/newtuple in spooled UPDATE changes was switched around. (Andres Freund, 2016-03-05)

  Somehow I managed to flip the order of restoring old & new tuples when
  de-spooling a change in a large transaction from disk. This happens to
  only take effect when a change is spooled to disk which has old/new
  versions of the tuple. That only is the case for UPDATEs where the
  primary key changed or where replica identity is changed to FULL.

  The tests didn't catch this because either spooled updates, or updates
  that changed primary keys, were tested; not both at the same time. Found
  while adding tests for the following commit.

  Backpatch: 9.4, where logical decoding was added
* logical decoding: Tell reorderbuffer about all xids. (Andres Freund, 2016-03-05)

  Logical decoding's reorderbuffer keeps transactions in an LSN ordered
  list for efficiency. To make that efficiently possible, upper-level xids
  are forced to be logged before nested subtransaction xids. That only
  works though if these records are all looked at: unfortunately we didn't
  do so for e.g. row level locks, which are otherwise uninteresting for
  logical decoding.

  This could lead to errors like: "ERROR: subxact logged without previous
  toplevel record".

  It's not sufficient to just look at row locking records; the xid could
  appear first due to a lot of other types of records (which will trigger
  the transaction to be marked logged with
  MarkCurrentTransactionIdLoggedIfAny). So invent infrastructure to tell
  reorderbuffer about xids seen, when they'd otherwise not pass through
  reorderbuffer.c.

  Reported-By: Jarred Ward
  Bug: #13844
  Discussion: 20160105033249.1087.66040@wrigleys.postgresql.org
  Backpatch: 9.4, where logical decoding was added
* Force synchronous_commit=on in test_decoding's concurrent_ddl_dml.spec. (Andres Freund, 2016-03-03)

  Otherwise running installcheck-force on a server with
  synchronous_commit=off will result in the tests failing. All the other
  tests already do so...

  Backpatch: 9.4, where logical decoding was added
* logical decoding: fix decoding of a commit's commit time. (Andres Freund, 2016-03-02)

  When adding replication origins in 5aa235042, I somehow managed to set
  the timestamp of decoded transactions to InvalidXLogRecptr when decoding
  one made without a replication origin. Fix that, and the wrong type of
  the new commit_time variable.

  This didn't trigger a regression test failure because we explicitly
  don't show commit timestamps in the regression tests, as they obviously
  are variable. Add a test that checks that a decoded commit's timestamp
  is within minutes of NOW() from before the commit.

  Reported-By: Weiping Qu
  Diagnosed-By: Artur Zakirov
  Discussion: 56D4197E.9050706@informatik.uni-kl.de, 56D42918.1010108@postgrespro.ru
  Backpatch: 9.5, where 5aa235042 originates.
* Fix multiple bugs in contrib/pgstattuple's pgstatindex() function. (Tom Lane, 2016-02-18)

  Dead or half-dead index leaf pages were incorrectly reported as live, as
  a consequence of a code rearrangement I made (during a moment of severe
  brain fade, evidently) in commit d287818eb514d431.

  The index metapage was not counted in index_size, causing that result to
  not agree with the actual index size on-disk.

  Index root pages were not counted in internal_pages, which is
  inconsistent compared to the case of a root that's also a leaf (one-page
  index), where the root would be counted in leaf_pages. Aside from that
  inconsistency, this could lead to additional transient discrepancies
  between the reported page counts and index_size, since it's possible for
  pgstatindex's scan to see zero or multiple pages marked as BTP_ROOT, if
  the root moves due to a split during the scan. With these fixes,
  index_size will always be exactly one page more than the sum of the
  displayed page counts.

  Also, the index_size result was incorrectly documented as being measured
  in pages; it's always been measured in bytes. (While fixing that, I
  couldn't resist doing some small additional wordsmithing on the
  pgstattuple docs.)

  Including the metapage causes the reported index_size to not be zero for
  an empty index. To preserve the desired property that the pgstattuple
  regression test results are platform-independent (ie, BLCKSZ
  configuration independent), scale the index_size result in the
  regression tests.

  The documentation issue was reported by Otsuka Kenji, and the
  inconsistent root page counting by Peter Geoghegan; the other problems
  noted by me. Back-patch to all supported branches, because this has been
  broken for a long time.
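  The page-accounting invariant the fix establishes, written as a
  checkable expression (a sketch; pgstatindex() reports these as separate
  result columns):

      /* Every page is the metapage or in exactly one counter, so: */
      Assert(index_size ==
             (uint64) BLCKSZ * (1 +               /* the metapage */
                                internal_pages +  /* incl. a non-leaf root */
                                leaf_pages +
                                empty_pages +
                                deleted_pages));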
* postgres_fdw: Avoid possible misbehavior when RETURNING tableoid column only. (Robert Haas, 2016-02-04)

  deparseReturningList ended up adding RETURNING NULL to the code, but
  code elsewhere saw an empty list of attributes and concluded that it
  should not expect tuples from the remote side.

  Etsuro Fujita and Robert Haas, reviewed by Thom Brown
* Fix IsValidJsonNumber() to notice trailing non-alphanumeric garbage. (Tom Lane, 2016-02-03)

  Commit e09996ff8dee3f70 was one brick shy of a load: it didn't insist
  that the detected JSON number be the whole of the supplied string. This
  allowed inputs such as "2016-01-01" to be misdetected as valid JSON
  numbers. Per bug #13906 from Dmitry Ryabov.

  In passing, be more wary of zero-length input (I'm not sure this can
  happen given current callers, but better safe than sorry), and do some
  minor cosmetic cleanup.
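  The essence of the tightened check, as a self-contained stand-in (the
  real code drives the JSON lexer rather than strtod()):

      #include <stdlib.h>

      /* Assumes str is NUL-terminated with strlen(str) == len. */
      static int
      is_valid_number_sketch(const char *str, size_t len)
      {
          char       *end;

          if (len == 0)               /* be wary of zero-length input */
              return 0;
          (void) strtod(str, &end);
          return end == str + len;    /* the parse must consume the whole
                                       * string: "2016-01-01" now fails */
      }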
* Fix incorrect pattern-match processing in psql's \det command. (Tom Lane, 2016-01-29)

  listForeignTables' invocation of processSQLNamePattern did not match up
  with the other ones that handle potentially-schema-qualified names; it
  failed to make use of pg_table_is_visible() and also passed the name
  arguments in the wrong order.

  Bug seems to have been aboriginal in commit 0d692a0dc9f0e532. It
  accidentally sort of worked as long as you didn't inquire too closely
  into the behavior, although the silliness was later exposed by
  inconsistencies in the test queries added by 59efda3e50ca4de6 (which I
  probably should have questioned at the time, but didn't).

  Per bug #13899 from Reece Hart. Patch by Reece Hart and Tom Lane.
  Back-patch to all affected branches.
* Remove new coupling between NAMEDATALEN and MAX_LEVENSHTEIN_STRLEN. (Tom Lane, 2016-01-22)

  Commit e529cd4ffa605c6f introduced an Assert requiring NAMEDATALEN to be
  less than MAX_LEVENSHTEIN_STRLEN, which has been 255 for a long time.
  Since up to that instant we had always allowed NAMEDATALEN to be
  substantially more than that, this was ill-advised.

  It's debatable whether we need MAX_LEVENSHTEIN_STRLEN at all (versus
  putting a CHECK_FOR_INTERRUPTS into the loop), or whether it has to be
  so tight; but this patch takes the narrower approach of just not
  applying the MAX_LEVENSHTEIN_STRLEN limit to calls from the parser.

  Trusting the parser for this seems reasonable, first because the strings
  are limited to NAMEDATALEN which is unlikely to be hugely more than 256,
  and second because the maximum distance is tightly constrained by
  MAX_FUZZY_DISTANCE (though we'd forgotten to make use of that limit in
  one place). That means the cost is not really O(mn) but more like
  O(max(m,n)).

  Relaxing the limit for user-supplied calls is left for future research;
  given the lack of complaints to date, it doesn't seem very high
  priority.

  In passing, fix confusion between lengths-in-bytes and lengths-in-chars
  in comments and error messages.

  Per gripe from Kevin Day; solution suggested by Robert Haas. Back-patch
  to 9.5 where the unwanted restriction was introduced.
* Use LOAD not actual code execution to pull in plpython library. (Tom Lane, 2016-01-11)

  Commit 866566a690bb9916 is insufficient to prevent dump/reload failures
  when using transform modules in a database with both plpython2 and
  plpython3 installed. The reason is that the transform extension scripts
  use DO blocks as a mechanism to pull in the libpython library before
  creating the transform function. It's necessary to preload the library
  because the dynamic loader won't do it for us on every platform, leading
  to "unresolved symbol" failures when the transform library is loaded.
  But it's *not* necessary to execute Python code, and doing so will
  provoke a multiple-Pythons-are-loaded error even after the preceding
  commit.

  To fix, use LOAD instead of a DO block. That requires superuser
  privilege, but creation of a C function does anyway. It also embeds
  knowledge of the underlying library name for each PL language; but
  that's wired into the initdb-time contents of pg_pltemplate too, so that
  doesn't seem like a large problem either. Note that CREATE TRANSFORM as
  such doesn't call the language module at all.

  Per a report from Paul Jones. Back-patch to 9.5 where transform modules
  were introduced.
* Sort $(wildcard) output where needed for reproducible build output. (Tom Lane, 2016-01-05)

  The order of inclusion of .o files makes a difference in linker output;
  not a functional difference, but still a bitwise difference, which
  annoys some packagers who would like reproducible builds.

  Report and patch by Christoph Berg
* Add a comment noting that FDWs don't have to implement EXCEPT or LIMIT TO. (Tom Lane, 2015-12-31)

  postgresImportForeignSchema pays attention to IMPORT's EXCEPT and LIMIT
  TO options, but only as an efficiency hack, not for correctness' sake.
  The FDW documentation does explain that, but someone using
  postgres_fdw.c as a coding guide might not remember it, so let's add a
  comment here. Per question from Regina Obe.
* Add forgotten CHECK_FOR_INTERRUPTS calls in pgcrypto's crypt() (Alvaro Herrera, 2015-12-27)

  Both Blowfish and DES implementations of crypt() can take arbitrarily
  long time, depending on the number of rounds specified by the caller;
  make sure they can be interrupted.

  Author: Andreas Karlsson
  Reviewer: Jeff Janes

  Backpatch to 9.1.
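  The fix is the standard backend pattern for long-running loops (sketch;
  the real loops live in the Blowfish and DES key-stretching code):

      #include "postgres.h"
      #include "miscadmin.h"

      static void
      stretch_key(int nrounds)
      {
          int         i;

          for (i = 0; i < nrounds; i++)
          {
              CHECK_FOR_INTERRUPTS();     /* honor query cancel / backend exit */
              /* ... one round of the key-stretching permutation ... */
          }
      }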
* Allow foreign and custom joins to handle EvalPlanQual rechecks. (Robert Haas, 2015-12-08)

  Commit e7cb7ee14555cc9c5773e2c102efd6371f6f2005 provided basic
  infrastructure for allowing a foreign data wrapper or custom scan
  provider to replace a join of one or more tables with a scan. However,
  this infrastructure failed to take into account the need for possible
  EvalPlanQual rechecks, and ExecScanFetch would fail an assertion (or
  just overwrite memory) if such a check was attempted for a plan
  containing a pushed-down join. To fix, adjust the EPQ machinery to skip
  some processing steps when scanrelid == 0, making those the
  responsibility of scan's recheck method, which also has the
  responsibility in this case of correctly populating the relevant slot.

  To allow foreign scans to gain control in the right place to make use of
  this new facility, add a new, optional RecheckForeignScan method. Also,
  allow a foreign scan to have a child plan, which can be used to
  correctly populate the slot (or perhaps for something else, but this is
  the only use currently envisioned).

  KaiGai Kohei, reviewed by Robert Haas, Etsuro Fujita, and Kyotaro
  Horiguchi.
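  The new callback's shape, with a hypothetical body that re-runs the
  pushed-down join via the child ("outer") plan and populates the scan
  slot itself:

      static bool
      example_RecheckForeignScan(ForeignScanState *node, TupleTableSlot *slot)
      {
          TupleTableSlot *result = ExecProcNode(outerPlanState(node));

          if (TupIsNull(result))
              return false;           /* the join no longer yields a row */

          ExecCopySlot(slot, result); /* recheck method must fill the slot */
          return true;
      }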