path: root/contrib
* postgres_fdw: Avoid possible misbehavior when RETURNING tableoid column only.  (Robert Haas, 2016-02-04)

  deparseReturningList ended up adding RETURNING NULL to the deparsed query, but code elsewhere saw an empty list of attributes and concluded that it should not expect tuples from the remote side.

  Etsuro Fujita and Robert Haas, reviewed by Thom Brown

* Fix IsValidJsonNumber() to notice trailing non-alphanumeric garbage.  (Tom Lane, 2016-02-03)

  Commit e09996ff8dee3f70 was one brick shy of a load: it didn't insist that the detected JSON number be the whole of the supplied string.  This allowed inputs such as "2016-01-01" to be misdetected as valid JSON numbers.  Per bug #13906 from Dmitry Ryabov.

  In passing, be more wary of zero-length input (I'm not sure this can happen given current callers, but better safe than sorry), and do some minor cosmetic cleanup.

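  A minimal standalone sketch of the whole-string requirement described above (this is not the actual IsValidJsonNumber() code, the helper name is invented, and strtod() accepts a broader grammar than JSON numbers):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdlib.h>

    /* Sketch only: accept the input as a number when the numeric parse
     * consumed every byte, so "2016-01-01" is rejected even though it
     * starts with a valid number.  Zero-length input is rejected too. */
    static bool
    whole_string_is_number(const char *str, size_t len)
    {
        char       *endptr;

        if (len == 0)
            return false;
        (void) strtod(str, &endptr);
        return endptr == str + len;
    }
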
* Fix incorrect pattern-match processing in psql's \det command.  (Tom Lane, 2016-01-29)

  listForeignTables' invocation of processSQLNamePattern did not match up with the other ones that handle potentially-schema-qualified names; it failed to make use of pg_table_is_visible() and also passed the name arguments in the wrong order.

  Bug seems to have been aboriginal in commit 0d692a0dc9f0e532.  It accidentally sort of worked as long as you didn't inquire too closely into the behavior, although the silliness was later exposed by inconsistencies in the test queries added by 59efda3e50ca4de6 (which I probably should have questioned at the time, but didn't).

  Per bug #13899 from Reece Hart.  Patch by Reece Hart and Tom Lane.  Back-patch to all affected branches.

* Remove new coupling between NAMEDATALEN and MAX_LEVENSHTEIN_STRLEN.  (Tom Lane, 2016-01-22)

  Commit e529cd4ffa605c6f introduced an Assert requiring NAMEDATALEN to be less than MAX_LEVENSHTEIN_STRLEN, which has been 255 for a long time.  Since up to that instant we had always allowed NAMEDATALEN to be substantially more than that, this was ill-advised.

  It's debatable whether we need MAX_LEVENSHTEIN_STRLEN at all (versus putting a CHECK_FOR_INTERRUPTS into the loop), or whether it has to be so tight; but this patch takes the narrower approach of just not applying the MAX_LEVENSHTEIN_STRLEN limit to calls from the parser.

  Trusting the parser for this seems reasonable, first because the strings are limited to NAMEDATALEN, which is unlikely to be hugely more than 256, and second because the maximum distance is tightly constrained by MAX_FUZZY_DISTANCE (though we'd forgotten to make use of that limit in one place).  That means the cost is not really O(mn) but more like O(max(m,n)).

  Relaxing the limit for user-supplied calls is left for future research; given the lack of complaints to date, it doesn't seem very high priority.

  In passing, fix confusion between lengths-in-bytes and lengths-in-chars in comments and error messages.

  Per gripe from Kevin Day; solution suggested by Robert Haas.  Back-patch to 9.5 where the unwanted restriction was introduced.

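  A rough sketch of why a tight MAX_FUZZY_DISTANCE cap bounds the cost (the helper and the constant value below are assumptions for illustration, not the PostgreSQL code): when only distances up to the cap are interesting, a candidate whose length differs from the target by more than the cap can be discarded without running the O(mn) dynamic program at all, because the edit distance is at least the length difference.

    #include <stdbool.h>
    #include <stddef.h>

    #define MAX_FUZZY_DISTANCE 3    /* assumed cap, for illustration */

    /* Cheap pre-filter: skip the full Levenshtein computation when the
     * length difference alone already exceeds the distance we care about. */
    static bool
    within_fuzzy_range(size_t len_actual, size_t len_candidate)
    {
        size_t      diff = (len_actual > len_candidate)
            ? len_actual - len_candidate
            : len_candidate - len_actual;

        return diff <= MAX_FUZZY_DISTANCE;
    }
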
* Use LOAD not actual code execution to pull in plpython library.  (Tom Lane, 2016-01-11)

  Commit 866566a690bb9916 is insufficient to prevent dump/reload failures when using transform modules in a database with both plpython2 and plpython3 installed.  The reason is that the transform extension scripts use DO blocks as a mechanism to pull in the libpython library before creating the transform function.  It's necessary to preload the library because the dynamic loader won't do it for us on every platform, leading to "unresolved symbol" failures when the transform library is loaded.  But it's *not* necessary to execute Python code, and doing so will provoke a multiple-Pythons-are-loaded error even after the preceding commit.

  To fix, use LOAD instead of a DO block.  That requires superuser privilege, but creation of a C function does anyway.  It also embeds knowledge of the underlying library name for each PL language; but that's wired into the initdb-time contents of pg_pltemplate too, so that doesn't seem like a large problem either.

  Note that CREATE TRANSFORM as such doesn't call the language module at all.

  Per a report from Paul Jones.  Back-patch to 9.5 where transform modules were introduced.

* Sort $(wildcard) output where needed for reproducible build output.  (Tom Lane, 2016-01-05)

  The order of inclusion of .o files makes a difference in linker output; not a functional difference, but still a bitwise difference, which annoys some packagers who would like reproducible builds.

  Report and patch by Christoph Berg

* Add a comment noting that FDWs don't have to implement EXCEPT or LIMIT TO.  (Tom Lane, 2015-12-31)

  postgresImportForeignSchema pays attention to IMPORT's EXCEPT and LIMIT TO options, but only as an efficiency hack, not for correctness' sake.  The FDW documentation does explain that, but someone using postgres_fdw.c as a coding guide might not remember it, so let's add a comment here.

  Per question from Regina Obe.

* Add forgotten CHECK_FOR_INTERRUPTS calls in pgcrypto's crypt()  (Alvaro Herrera, 2015-12-27)

  Both Blowfish and DES implementations of crypt() can take an arbitrarily long time, depending on the number of rounds specified by the caller; make sure they can be interrupted.

  Author: Andreas Karlsson
  Reviewer: Jeff Janes

  Backpatch to 9.1.

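  A sketch of the shape of such a fix (the state type and per-round helper are hypothetical stand-ins, not the pgcrypto code; CHECK_FOR_INTERRUPTS() is the real backend macro): any loop whose iteration count is chosen by the caller should poll for interrupts so a query cancel can take effect.

    #include "postgres.h"
    #include "miscadmin.h"          /* CHECK_FOR_INTERRUPTS() */

    typedef struct crypt_state crypt_state;         /* hypothetical */
    extern void run_one_round(crypt_state *state);  /* hypothetical */

    static void
    run_all_rounds(crypt_state *state, long rounds)
    {
        long        i;

        for (i = 0; i < rounds; i++)
        {
            /* let a pending cancel or terminate request be serviced */
            CHECK_FOR_INTERRUPTS();
            run_one_round(state);
        }
    }
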
* Allow foreign and custom joins to handle EvalPlanQual rechecks.  (Robert Haas, 2015-12-08)

  Commit e7cb7ee14555cc9c5773e2c102efd6371f6f2005 provided basic infrastructure for allowing a foreign data wrapper or custom scan provider to replace a join of one or more tables with a scan.  However, this infrastructure failed to take into account the need for possible EvalPlanQual rechecks, and ExecScanFetch would fail an assertion (or just overwrite memory) if such a check was attempted for a plan containing a pushed-down join.  To fix, adjust the EPQ machinery to skip some processing steps when scanrelid == 0, making those the responsibility of the scan's recheck method, which also has the responsibility in this case of correctly populating the relevant slot.

  To allow foreign scans to gain control in the right place to make use of this new facility, add a new, optional RecheckForeignScan method.  Also, allow a foreign scan to have a child plan, which can be used to correctly populate the slot (or perhaps for something else, but this is the only use currently envisioned).

  KaiGai Kohei, reviewed by Robert Haas, Etsuro Fujita, and Kyotaro Horiguchi.

* Dodge a macro-name conflict with Perl.  (Tom Lane, 2015-11-19)

  Some versions of Perl export a macro named HS_KEY.  This creates a conflict in contrib/hstore_plperl against hstore's macro of the same name.  The most future-proof solution seems to be to rename our macro; I chose HSTORE_KEY.  For consistency, rename HS_VAL and related macros similarly.

  Back-patch to 9.5.  contrib/hstore_plperl doesn't exist before that so there is no need to worry about the conflict in older releases.

  Per reports from Marco Atzeri and Mike Blackwell.

* Message style improvements  (Peter Eisentraut, 2015-10-28)

  Message style, plurals, quoting, spelling, consistency with similar messages

* Allow FDWs to push down quals without breaking EvalPlanQual rechecks.  (Robert Haas, 2015-10-15)

  This fixes a long-standing bug which was discovered while investigating the interaction between the new join pushdown code and the EvalPlanQual machinery: if a ForeignScan appears on the inner side of a parameterized nestloop, an EPQ recheck would re-return the original tuple even if it no longer satisfied the pushed-down quals due to changed parameter values.

  This fix adds a new member to ForeignScan and ForeignScanState and a new argument to make_foreignscan, and requires changes to FDWs which push down quals to populate that new argument with a list of quals they have chosen to push down.  Therefore, I'm only back-patching to 9.5, even though the bug is not new in 9.5.

  Etsuro Fujita, reviewed by me and by Kyotaro Horiguchi.

* Prevent stack overflow in query-type functions.  (Noah Misch, 2015-10-05)

  The tsquery, ltxtquery and query_int data types have a common ancestor.  Having acquired check_stack_depth() calls independently, each was missing at least one call.  Back-patch to 9.0 (all supported versions).

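  The pattern being added looks roughly like the sketch below (the parser names are hypothetical; check_stack_depth() is the real backend call): each function in a mutually recursive descent parser begins by verifying there is stack left, so deeply nested input raises an error instead of overflowing the stack.

    #include "postgres.h"
    #include "miscadmin.h"          /* check_stack_depth() */

    typedef struct QueryParseState QueryParseState;        /* hypothetical */
    extern int parse_sub_expr(QueryParseState *state);     /* hypothetical */

    static int
    parse_expr(QueryParseState *state)
    {
        /* error out cleanly rather than overrun the stack on deep nesting */
        check_stack_depth();

        return parse_sub_expr(state);
    }
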
* pgcrypto: Detect and report too-short crypt() salts.  (Noah Misch, 2015-10-05)

  Certain short salts crashed the backend or disclosed a few bytes of backend memory.  For existing salt-induced error conditions, emit a message saying as much.  Back-patch to 9.0 (all supported versions).

  Josh Kupershmidt

  Security: CVE-2015-5288

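  A minimal sketch of the kind of validation described (the helper and message are assumptions; ereport() and the errcode are the real backend API): check the salt length up front and raise a user-facing error instead of reading past the end of a short salt.

    #include "postgres.h"

    /* Hypothetical helper: reject a salt shorter than the scheme requires. */
    static void
    check_salt_length(size_t saltlen, size_t required)
    {
        if (saltlen < required)
            ereport(ERROR,
                    (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                     errmsg("invalid salt")));
    }
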
* Improve contrib/pg_stat_statements' handling of garbage collection failure.  (Tom Lane, 2015-10-04)

  If we can't read the query texts file (whether because of out-of-memory or for some other reason), give up and reset the file to empty, discarding all stored query texts, though not the statistics per se.  We used to leave things alone and hope for better luck next time, but the problem is that the file is only going to get bigger and even harder to slurp into memory.  Better to do something that will get us out of trouble.

  Likewise reset the file to empty for any other failure within gc_qtexts().  The previous behavior after a write error was to discard query texts but not do anything to truncate the file, which is just weird.

  Also, increase the maximum supported file size from MaxAllocSize to MaxAllocHugeSize; this makes it more likely we'll be able to do a garbage collection successfully.

  Also, fix recalculation of mean_query_len within entry_dealloc() to match the calculation in gc_qtexts().  The previous coding overlooked the possibility of dropped texts (query_len == -1) and would underestimate the mean of the remaining entries in such cases, thus possibly causing excess garbage collection cycles.

  In passing, add some errdetail to the log entry that complains about insufficient memory to read the query texts file, which after all was Jim Nasby's original complaint.

  Back-patch to 9.4 where the current handling of query texts was introduced.

  Peter Geoghegan, rather editorialized upon by me

* Improve errhint() about replication slot naming restrictions.  (Andres Freund, 2015-10-03)

  The existing hint talked about "may only contain letters", but the actual requirement is more strict: only lower case letters are allowed.

  Reported-By: Rushabh Lathia
  Author: Rushabh Lathia
  Discussion: AGPqQf2x50qcwbYOBKzb4x75sO_V3g81ZsA8+Ji9iN5t_khFhQ@mail.gmail.com
  Backpatch: 9.4-, where replication slots were added

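  For reference, the rule the corrected hint describes can be sketched as a per-character check like the one below (an illustration, not the backend's actual validation code): slot names may contain only lower-case letters, digits, and underscores.

    #include <stdbool.h>

    /* Illustrative only: one character of a replication slot name is valid
     * if it is a lower-case ASCII letter, a digit, or an underscore. */
    static bool
    slot_name_char_ok(char c)
    {
        return (c >= 'a' && c <= 'z') ||
               (c >= '0' && c <= '9') ||
               c == '_';
    }
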
* Improve handling of collations in contrib/postgres_fdw.  (Tom Lane, 2015-09-24)

  If we have a local Var of say varchar type with default collation, and we apply a RelabelType to convert that to text with default collation, we don't want to consider that as creating an FDW_COLLATE_UNSAFE situation.  It should be okay to compare that to a remote Var, so long as the remote Var determines the comparison collation.  (When we actually ship such an expression to the remote side, the local Var would become a Param with default collation, meaning the remote Var would in fact control the comparison collation, because non-default implicit collation overrides default implicit collation in parse_collate.c.)

  To fix, be more precise about what FDW_COLLATE_NONE means: it applies either to a noncollatable data type or to a collatable type with default collation, if that collation can't be traced to a remote Var.  (When it can, FDW_COLLATE_SAFE is appropriate.)  We were essentially using that interpretation already at the Var/Const/Param level, but we weren't bubbling it up properly.

  An alternative fix would be to introduce a separate FDW_COLLATE_DEFAULT value to describe the second situation, but that would add more code without changing the actual behavior, so it didn't seem worthwhile.

  Also, since we're clarifying the rule to be that we care about whether operator/function input collations match, there seems no need to fail immediately upon seeing a Const/Param/non-foreign-Var with nondefault collation.  We only have to reject if it appears in a collation-sensitive context (for example, "var IS NOT NULL" is perfectly safe from a collation standpoint, whatever collation the var has).  So just set the state to UNSAFE rather than failing immediately.

  Per report from Jeevan Chalke.  This essentially corrects some sloppy thinking in commit ed3ddf918b59545583a4b374566bc1148e75f593, so back-patch to 9.3 where that logic appeared.

* test_decoding: Protect against rare spurious test failures.  (Andres Freund, 2015-09-22)

  A bunch of tests missed specifying that empty transactions shouldn't be displayed.  That causes problems when e.g. autovacuum runs at an unfortunate moment.  The tests in question only run for a very short time, making this quite unlikely.

  Reported-By: Buildfarm member axolotl
  Backpatch: 9.4, where logical decoding was introduced

* Fix error message wording in previous sslinfo commit  (Alvaro Herrera, 2015-09-08)

* Add more sanity checks in contrib/sslinfo  (Alvaro Herrera, 2015-09-07)

  We were missing a few return checks on OpenSSL calls.  Should be pretty harmless, since we haven't seen any user reports about problems, and this is not a high-traffic module anyway; still, a bug is a bug, so backpatch this all the way back to 9.0.

  Author: Michael Paquier, while reviewing another sslinfo patch

* Fix misc typos.  (Heikki Linnakangas, 2015-09-05)

  Oskari Saarenmaa.  Backpatch to stable branches where applicable.

* Fix sepgsql regression tests.  (Joe Conway, 2015-08-30)

  The regression tests for sepgsql were broken by changes in the base distro as-shipped policies.  Specifically, the definition of unconfined_t in the system default policy was changed to bypass multi-category rules, which the regression test depended on.  Fix that by defining a custom privileged domain (sepgsql_regtest_superuser_t) and using it instead of the system's unconfined_t domain.  The new sepgsql_regtest_superuser_t domain performs almost like the current unconfined_t, but restricted by multi-category policy as the traditional unconfined_t was.

  The custom policy module is a self-defined domain, and so should not be affected by related future system policy changes.  However, it still uses the unconfined_u:unconfined_r pair for selinux-user and role.  Those definitions have not been changed for several years and seem less risky to rely on than the unconfined_t domain.  Additionally, if we defined a custom user/role, they would need to be manually defined at the operating system level, adding more complexity to an already non-standard and complex regression test.

  Back-patch to 9.3.  The regression tests will need more work before working correctly on 9.2.  Starting with 9.2, sepgsql has had dependencies on libselinux versions that are only available on newer distros with the changed set of policies (e.g. RHEL 7.x).  On 9.1, sepgsql works fine with the older distros with the original policy set (e.g. RHEL 6.x), on which the existing regression tests work fine.  We might eventually want to change the 9.1 sepgsql regression tests to be more independent from the underlying OS policies, however more work will be needed to make that happen and it is not clear that it is worth the effort.

  Kohei KaiGai with review by Adam Brightwell and me, commentary by Stephen, Alvaro, Tom, Robert, and others.

* Improve spelling  (Peter Eisentraut, 2015-08-22)

* Use materialize SRF mode in brin_page_items  (Alvaro Herrera, 2015-08-13)

  This function was using the single-value-per-call mechanism, but the code relied on a relcache entry that wasn't kept open across calls.  This manifested as weird errors in the buildfarm during the short time that the "brin-1" isolation test lived.

  Backpatch to 9.5, where it was introduced.

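  For context, the materialize mode mentioned here builds the whole result set in a tuplestore during a single call, so nothing (such as a relcache entry) has to stay open between per-row calls.  A compressed sketch of that pattern follows; the function name is hypothetical, the real BRIN page decoding is replaced by a placeholder row, and most error handling is omitted.

    #include "postgres.h"
    #include "fmgr.h"
    #include "funcapi.h"
    #include "miscadmin.h"          /* work_mem */
    #include "utils/tuplestore.h"

    PG_FUNCTION_INFO_V1(demo_page_items);

    Datum
    demo_page_items(PG_FUNCTION_ARGS)
    {
        ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
        TupleDesc   tupdesc;
        Tuplestorestate *tupstore;
        MemoryContext oldcontext;
        Datum       values[1];
        bool        nulls[1] = {false};

        if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
            elog(ERROR, "return type must be a row type");

        /* the tuplestore must live in the per-query memory context */
        oldcontext = MemoryContextSwitchTo(rsinfo->econtext->ecxt_per_query_memory);
        tupstore = tuplestore_begin_heap(true, false, work_mem);
        rsinfo->returnMode = SFRM_Materialize;
        rsinfo->setResult = tupstore;
        rsinfo->setDesc = tupdesc;
        MemoryContextSwitchTo(oldcontext);

        values[0] = Int32GetDatum(42);      /* placeholder row */
        tuplestore_putvalues(tupstore, tupdesc, values, nulls);

        return (Datum) 0;
    }
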
* contrib/isn now needs a .gitignore file.  (Tom Lane, 2015-08-02)

  Oversight in commit cb3384a0cb4cf900622b77865f60e31259923079.  Back-patch to 9.1, like that commit.

* Fix a number of places that produced XX000 errors in the regression tests.  (Tom Lane, 2015-08-02)

  It's against project policy to use elog() for user-facing errors, or to omit an errcode() selection for errors that aren't supposed to be "can't happen" cases.  Fix all the violations of this policy that result in ERRCODE_INTERNAL_ERROR log entries during the standard regression tests, as errors that can reliably be triggered from SQL surely should be considered user-facing.

  I also looked through all the files touched by this commit and fixed other nearby problems of the same ilk.  I do not claim to have fixed all violations of the policy, just the ones in these files.

  In a few places I also changed existing ERRCODE choices that didn't seem particularly appropriate; mainly replacing ERRCODE_SYNTAX_ERROR by something more specific.

  Back-patch to 9.5, but no further; changing ERRCODE assignments in stable branches doesn't seem like a good idea.

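  The policy in question boils down to the difference sketched below (the function and message text are invented for illustration): errors a user can trigger from SQL should go through ereport() with an explicit errcode(), because a bare elog(ERROR, ...) is reported with SQLSTATE XX000, i.e. ERRCODE_INTERNAL_ERROR.

    #include "postgres.h"

    static void
    report_bad_input(void)
    {
        /* discouraged for user-facing errors: implies ERRCODE_INTERNAL_ERROR */
        /* elog(ERROR, "invalid input"); */

        /* preferred: choose a specific SQLSTATE */
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                 errmsg("invalid input")));
    }
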
* Fix output of ISBN-13 numbers beginning with 979.  (Heikki Linnakangas, 2015-08-02)

  EANs beginning with 979 (but not 9790; those are ISMNs) are accepted as ISBN numbers, but they cannot be represented in the old, 10-digit ISBN format.  They must be output in the new 13-digit ISBN-13 format.  We printed out an incorrect value for those.

  Also add a regression test, to test this and some other basic functionality of the module.

  Patch by Fabien Coelho.  This fixes bug #13442, reported by B.Z.  Backpatch to 9.1, where we started to recognize ISBN-13 numbers.

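  A minimal illustration of the underlying rule (not the contrib/isn code; the helper name is invented): only the 978 EAN prefix maps onto the legacy 10-digit ISBN form, so anything else has to be printed as ISBN-13.

    #include <stdbool.h>
    #include <string.h>

    /* Illustrative only: an EAN-13 has a 10-digit ISBN equivalent only when
     * it carries the 978 prefix; 979-prefixed ISBNs do not. */
    static bool
    ean13_has_isbn10_form(const char *ean13)
    {
        return strncmp(ean13, "978", 3) == 0;
    }
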
* Some platforms now need contrib/tsm_system_time to be linked with libm.  (Tom Lane, 2015-07-25)

  Buildfarm member hornet, at least, seems to want -lm in the link command.  Probably this is due to the just-added use of isnan().

* Redesign tablesample method API, and do extensive code review.  (Tom Lane, 2015-07-25)

  The original implementation of TABLESAMPLE modeled the tablesample method API on index access methods, which wasn't a good choice because, without specialized DDL commands, there's no way to build an extension that can implement a TSM.  (Raw inserts into system catalogs are not an acceptable thing to do, because we can't undo them during DROP EXTENSION, nor will pg_upgrade behave sanely.)  Instead adopt an API more like procedural language handlers or foreign data wrappers, wherein the only SQL-level support object needed is a single handler function identified by having a special return type.  This lets us get rid of the supporting catalog altogether, so that no custom DDL support is needed for the feature.

  Adjust the API so that it can support non-constant tablesample arguments (the original coding assumed we could evaluate the argument expressions at ExecInitSampleScan time, which is undesirable even if it weren't outright unsafe), and discourage sampling methods from looking at invisible tuples.  Make sure that the BERNOULLI and SYSTEM methods are genuinely repeatable within and across queries, as required by the SQL standard, and deal more honestly with methods that can't support that requirement.

  Make a full code-review pass over the tablesample additions, and fix assorted bugs, omissions, infelicities, and cosmetic issues (such as failure to put the added code stanzas in a consistent ordering).  Improve EXPLAIN's output of tablesample plans, too.

  Back-patch to 9.5 so that we don't have to support the original API in production.

* Enable transforms modules to build and test on Cygwin.  (Andrew Dunstan, 2015-07-18)

  This still doesn't work correctly with Python 3, but I am committing this so we can get Cygwin buildfarm members building with Python 2.

* AIX: Link TRANSFORM modules with their dependencies.  (Noah Misch, 2015-07-15)

  The result closely resembles linking of these modules for the "win32" port.  Augment the $(exports_file) header so the file is also usable as an import file.

  Unfortunately, relocating an AIX installation will now require adding $(pkglibdir) to LD_LIBRARY_PATH.  Back-patch to 9.5, where the modules were introduced.

* MinGW: Link ltree_plpython with plpython.  (Noah Misch, 2015-07-15)

  The MSVC build system already did this, and building against Python 3 requires it.  Back-patch to 9.5, where the module was introduced.

* Prevent pgstattuple() from reporting BRIN as unknown index.  (Fujii Masao, 2015-07-14)

  Also, this patch removes an obsolete comment.

  Back-patch to 9.5, where BRIN indexes were added.

* Link pg_stat_statements with libm.  (Noah Misch, 2015-07-08)

  The AIX 7.1 libm is static, and AIX postgres executables do not export symbols acquired from libraries.  Back-patch to 9.5, where commit cfe12763c32437bc708a64ce88a90c7544f16185 added a sqrt() call.

* Use appendStringInfoString/Char et al where appropriate.  (Heikki Linnakangas, 2015-07-02)

  Patch by David Rowley.  Backpatch to 9.5, as some of the calls were new in 9.5, and keeping the code in sync with master makes future backpatching easier.

* Make use of xlog_internal.h's macros in WAL-related utilities.  (Fujii Masao, 2015-07-02)

  Commit 179cdd09 added macros to check if a filename is a WAL segment or other such file.  However, there were still some instances of the strlen + strspn combination to check for that in WAL-related utilities like pg_archivecleanup.  Those checks can be replaced with the macros.

  This patch makes use of the macros in those utilities, which makes the code a bit easier to read.

  Back-patch to 9.5.

  Michael Paquier

* Fix test_decoding's handling of nonexistent columns in old tuple versions.  (Andres Freund, 2015-06-27)

  test_decoding used fastgetattr() to extract column values.  That's wrong when decoding updates and deletes if a table's replica identity is set to FULL and new columns have been added since the old version of the tuple was created.  Due to the lack of a crosscheck against the tuple's natts value, an invalid value will be output, leading to errors or worse.

  Bug: #13470
  Reported-By: Krzysztof Kotlarski
  Discussion: 20150626100333.3874.90852@wrigleys.postgresql.org

  Backpatch to 9.4, where the feature, including the bug, was added.

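  The crosscheck described amounts to something like the sketch below (an assumption about the shape of the fix, not the actual test_decoding change): a column number beyond the number of attributes physically present in the old tuple must be treated as null instead of being fetched.

    #include "postgres.h"
    #include "access/htup_details.h"

    /* Illustrative helper: fetch attribute "attnum" from an old tuple
     * version, returning null for columns added after it was created. */
    static Datum
    get_old_attr(HeapTuple tuple, int attnum, TupleDesc tupdesc, bool *isnull)
    {
        if (attnum > HeapTupleHeaderGetNatts(tuple->t_data))
        {
            *isnull = true;
            return (Datum) 0;
        }
        return fastgetattr(tuple, attnum, tupdesc, isnull);
    }
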
* Make Python tests more portable  (Peter Eisentraut, 2015-05-31)

  Newer Python versions randomize the hash seed for dictionaries, resulting in a random output order, which messes up the regression test diffs.  Instead, use Python assert to compare the dictionaries with their expected value.

* Finish removing pg_audit  (Stephen Frost, 2015-05-28)

* Remove pg_audit  (Stephen Frost, 2015-05-28)

  This removes pg_audit, per discussion:
  20150528082038.GU26667@tamriel.snowman.net

* Manual cleanup of pgindent results.  (Tom Lane, 2015-05-24)

  Fix some places where pgindent did silly stuff, often because project style wasn't followed to begin with.  (I've not touched the atomics headers, though.)

* Remove no-longer-required function declarations.  (Tom Lane, 2015-05-24)

  Remove a bunch of "extern Datum foo(PG_FUNCTION_ARGS);" declarations that are no longer needed now that PG_FUNCTION_INFO_V1(foo) provides that.

  Some of these were evidently missed in commit e7128e8dbb305059, but others were cargo-culted in, in code added since then.  Possibly that can be blamed in part on the fact that we'd not fixed relevant documentation examples, which I've now done.

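  The redundancy being removed looks like the sketch below (hypothetical function name); the point is that PG_FUNCTION_INFO_V1() already emits the extern declaration, so spelling it out separately adds nothing.

    #include "postgres.h"
    #include "fmgr.h"

    /*
     * No longer needed, because the macro below declares the function:
     * extern Datum my_contrib_func(PG_FUNCTION_ARGS);
     */

    PG_FUNCTION_INFO_V1(my_contrib_func);

    Datum
    my_contrib_func(PG_FUNCTION_ARGS)
    {
        PG_RETURN_INT32(PG_GETARG_INT32(0) + 1);
    }
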
* pgindent run for 9.5  (Bruce Momjian, 2015-05-23)

* Collection of typo fixes.  (Heikki Linnakangas, 2015-05-20)

  Use "a" and "an" correctly, mostly in comments.  Two error messages were also fixed (they were just elogs, so no translation work required).  Two function comments in pg_proc.h were also fixed.  Etsuro Fujita reported one of these, but I found a lot more with grep.

  Also fix a few other typos spotted while grepping for the a/an typos.  For example, "consists out of ..." -> "consists of ...".  Plus a "though"/"through" mixup reported by Euler Taveira.

  Many of these typos were in old code, which would be nice to backpatch to make future backpatching easier.  But much of the code was new, and I didn't feel like crafting separate patches for each branch.  So no backpatching.

* Refactor ON CONFLICT index inference parse tree representation.  (Andres Freund, 2015-05-19)

  Defer lookup of the opfamily and input type of a user-specified opclass until the optimizer selects among available unique indexes, and store the opclass in the parse-analyzed tree instead.  The primary reason for doing this is that for rule deparsing it's easier to use the opclass than the previous representation.

  While at it, also rename a variable in the inference code to better fit its purpose.

  This is separate from the actual fixes for deparsing, to make review easier.

* Fix parse tree of DROP TRANSFORM and COMMENT ON TRANSFORM  (Peter Eisentraut, 2015-05-18)

  The plain C string language name needs to be wrapped in makeString() so that the parse tree is copyable.  This is detectable by -DCOPY_PARSE_PLAN_TREES.  Add a test case for the COMMENT case.  Also make the quoting in the error messages more consistent.

  discovered by Tom Lane

* pgcrypto: Report errant decryption as "Wrong key or corrupt data".  (Noah Misch, 2015-05-18)

  This has been the predominant outcome.  When the output of decrypting with a wrong key coincidentally resembled an OpenPGP packet header, pgcrypto could instead report "Corrupt data", "Not text data" or "Unsupported compression algorithm".  The distinct "Corrupt data" message added no value.  The latter two error messages misled when the decrypted payload also exhibited fundamental integrity problems.  Worse, error message variance in other systems has enabled cryptologic attacks; see RFC 4880 section "14. Security Considerations".  Whether these pgcrypto behaviors are likewise exploitable is unknown.

  In passing, document that pgcrypto does not resist side-channel attacks.

  Back-patch to 9.0 (all supported versions).

  Security: CVE-2015-3167

* Use += not = to set makefile variables after including base makefiles.  (Tom Lane, 2015-05-17)

  The previous coding in hstore_plpython and ltree_plpython wiped out any values set by the base makefiles.  This at least had the effect of running the tests in "regression", not in "contrib_regression" as expected.  These being pretty new modules, there might be other bad effects we'd not noticed yet.

* pg_audit Makefile, REINDEX changes  (Stephen Frost, 2015-05-17)

  Clean up the Makefile, per Michael Paquier.

  Classify REINDEX as we do in core, use '1.0' for the version, per Fujii.

* Fix typos in comments  (Magnus Hagander, 2015-05-17)

  Dmitriy Olshevskiy