path: root/src/backend/utils
...
* Centralize json and jsonb handling of datetime types (Andrew Dunstan, 2018-01-16)
  This creates a single function, JsonEncodeDateTime, which formats these data types in an
  efficient and consistent manner. This will be all the more important when we come to
  jsonpath, so we don't have to implement yet more code doing the same thing in two more
  places. This also extends the code to handle the time and timetz types, which were not
  previously handled specially. This requires exposing the time2tm and timetz2tm functions.
  Patch from Nikita Glukhov
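A quick, hedged way to exercise the conversions this code path covers (an illustrative query,
not part of the commit; exact output depends on DateStyle and TimeZone):

    SELECT to_json(current_timestamp) AS ts,
           to_json(current_time)      AS timetz_value,   -- timetz, now handled specially
           to_json(localtime)         AS time_value;     -- time, now handled specially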
* Remove useless use of bit-masking macros (Peter Eisentraut, 2018-01-16)
  In this case, the macros SET_8_BYTES(), GET_8_BYTES(), SET_4_BYTES(), GET_4_BYTES() are
  no-ops, so we can just remove them. The plan is to perhaps remove them from the source
  code altogether, so we'll start here.
  Discussion: https://www.postgresql.org/message-id/5d51721a-69ef-2053-9172-599b539f0628@2ndquadrant.com
* Fix Latin spelling (Peter Eisentraut, 2018-01-11)
  "c.f." should be "cf.".
* Give more accurate error message for dropping pinned portal (Peter Eisentraut, 2018-01-10)
  The previous code gave the same error message for attempting to drop pinned and active
  portals, but those are separate states, so give separate error messages.
* Implement TZH and TZM timestamp format patterns (Andrew Dunstan, 2018-01-09)
  These are compatible with Oracle and required for the datetime template language for
  jsonpath in an upcoming patch.
  Nikita Glukhov and Andrew Dunstan, reviewed by Pavel Stehule.
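A minimal illustration of the new patterns (not part of the commit itself): TZH and TZM emit
the timezone hour and minute offsets in to_char() output, so the result depends on the
session's TimeZone setting.

    SET timezone = 'America/New_York';
    SELECT to_char(current_timestamp, 'YYYY-MM-DD HH24:MI:SS TZH:TZM');
    -- e.g. "2018-01-09 12:34:56 -05:00"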
* Update portal-related memory context names and API (Peter Eisentraut, 2018-01-09)
  Rename PortalMemory to TopPortalContext, to avoid confusion with PortalContext and align
  naming with similar top-level memory contexts.

  Rename PortalData's "heap" field to portalContext. The "heap" naming seems quite
  antiquated and confusing. Also get rid of the PortalGetHeapMemory() macro and access the
  field directly, which we do for other portal fields, so this abstraction doesn't buy
  anything.

  Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
  Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
* Fix typo (Alvaro Herrera, 2018-01-03)
  Author: Dagfinn Ilmari Mannsåker
  Discussion: https://postgr.es/m/d8jefpk4jtd.fsf@dalvik.ping.uio.no
* Update copyright for 2018 (Bruce Momjian, 2018-01-02)
  Backpatch-through: certain files through 9.3
* Protect against hypothetical memory leaks in RelationGetPartitionKey (Alvaro Herrera, 2017-12-27)
  Also, fix a comment that commit 8a0596cb656e made obsolete.
  Reported-by: Robert Haas
  Discussion: http://postgr.es/m/CA+TgmoYbpuUUUp2GhYNwWm0qkah39spiU7uOiNXLz20ASfKYoA@mail.gmail.com
* Add polygon opclass for SP-GiST (Teodor Sigaev, 2017-12-25)
  The polygon opclass uses the compress method feature of SP-GiST added earlier. For now it
  is the only operator class that uses this feature. SP-GiST actually indexes the bounding
  boxes of the input polygons, so some of the supported operations are lossy. The opclass
  reuses most methods of the corresponding SP-GiST opclass over boxes and treats bounding
  boxes as points in 4D space.

  Bump catalog version.

  Authors: Nikita Glukhov, Alexander Korotkov, with minor editing by me
  Reviewed-By: all authors + Darafei Praliaskouski
  Discussion: https://www.postgresql.org/message-id/flat/54907069.1030506@sigaev.ru
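A minimal usage sketch (table and column names are hypothetical, and this assumes the new
opclass is registered as the default SP-GiST opclass for polygon; otherwise name it
explicitly in the CREATE INDEX):

    CREATE TABLE shapes (p polygon);
    CREATE INDEX shapes_p_spgist ON shapes USING spgist (p);
    -- queries such as:  SELECT * FROM shapes WHERE p && polygon '((0,0),(1,0),(1,1))';
    -- can then use the index (lossily, via the indexed bounding boxes)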
* Minor edits to catalog files and scripts (Alvaro Herrera, 2017-12-21)
  This fixes a few typos and small mistakes; it also cleans up a few minor stylistic
  issues. The biggest functional change is that Gen_fmgrtab.pl no longer knows the OID of
  language 'internal'.
  Author: John Naylor
  Discussion: https://postgr.es/m/CAJVSVGXAkwbk-A9QHHHf00N905kKisyQbaYwKqaRpze_gPXGfg@mail.gmail.com
* Get rid of copy_partition_key (Alvaro Herrera, 2017-12-21)
  That function currently exists to avoid leaking memory in CacheMemoryContext in case of
  trouble while the partition key is being built, but there's a better way: allocate
  everything in a memcxt that goes away if the current (sub)transaction fails, and once the
  partition key is built and no further errors can occur, make the memcxt permanent by
  making it a child of CacheMemoryContext.
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/20171027172730.eh2domlkpn4ja62m@alvherre.pgsql
* Avoid putting build-location-dependent strings into generated files. (Tom Lane, 2017-12-21)
  Various Perl scripts we use to generate files were in the habit of printing things like
  "generated by $0" into their output files. That looks like a fine idea at first glance,
  but it results in non-reproducible output, because in VPATH builds $0 won't be just the
  name of the script file, but a full path for it. We'd prefer that you get identical
  results whether using VPATH or not, so this is a bad thing.

  Some of these places also printed their input file name(s), causing an additional hazard
  of the same type.

  Hence, establish a policy that thou shalt not print $0, nor input file pathnames, into
  output files (they're still allowed in error messages, though). Instead just write the
  script name verbatim. While we are at it, we can make these annotations more useful by
  giving the script's full relative path name within the PG source tree, eg instead of
  Gen_fmgrtab.pl let's print src/backend/utils/Gen_fmgrtab.pl.

  Not all of the changes made here actually affect any files shipped in finished tarballs
  today, but it seems best to apply the policy everyplace so that nobody copies unsafe code
  into places where it could matter.

  Christoph Berg and Tom Lane
  Discussion: https://postgr.es/m/20171215102223.GB31812@msg.df7cb.de
* Add parallel-aware hash joins. (Andres Freund, 2017-12-21)
  Introduce parallel-aware hash joins that appear in EXPLAIN plans as Parallel Hash Join
  with Parallel Hash. While hash joins could already appear in parallel queries, they were
  previously always parallel-oblivious and had a partial subplan only on the outer side,
  meaning that the work of the inner subplan was duplicated in every worker.

  After this commit, the planner will consider using a partial subplan on the inner side
  too, using the Parallel Hash node to divide the work over the available CPU cores and
  combine its results in shared memory. If the join needs to be split into multiple batches
  in order to respect work_mem, then workers process different batches as much as possible
  and then work together on the remaining batches.

  The advantages of a parallel-aware hash join over a parallel-oblivious hash join used in
  a parallel query are that it:

  * avoids wasting memory on duplicated hash tables
  * avoids wasting disk space on duplicated batch files
  * divides the work of building the hash table over the CPUs

  One disadvantage is that there is some communication between the participating CPUs
  which might outweigh the benefits of parallelism in the case of small hash tables. This
  is avoided by the planner's existing reluctance to supply partial plans for small scans,
  but it may be necessary to estimate synchronization costs in future if that situation
  changes. Another is that outer batch 0 must be written to disk if multiple batches are
  required.

  A potential future advantage of parallel-aware hash joins is that right and full outer
  joins could be supported, since there is a single set of matched bits for each hashtable,
  but that is not yet implemented.

  A new GUC enable_parallel_hash is defined to control the feature, defaulting to on.

  Author: Thomas Munro
  Reviewed-By: Andres Freund, Robert Haas
  Tested-By: Rafia Sabih, Prabhat Sahu
  Discussion: https://postgr.es/m/CAEepm=2W=cOkiZxcg6qiFQP-dHUe09aqTrEMM7yJDrHMhDv_RA@mail.gmail.com
              https://postgr.es/m/CAEepm=37HKyJ4U6XOLi=JgfSHM3o6B-GaeO-6hkOmneTDkH+Uw@mail.gmail.com
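A hedged sketch of how the feature surfaces to users, using the GUC named in the commit; the
table names are hypothetical and the plan shape will of course vary with the data and
settings:

    SET enable_parallel_hash = on;
    EXPLAIN (COSTS OFF)
    SELECT count(*) FROM big_fact f JOIN big_dim d USING (id);
    -- With enough rows, the plan may contain:
    --   Parallel Hash Join
    --     -> Parallel Seq Scan on big_fact f
    --     -> Parallel Hash
    --          -> Parallel Seq Scan on big_dim d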
* Add shared tuplestores. (Andres Freund, 2017-12-18)
  SharedTuplestore allows multiple participants to write into it and then read the tuples
  back from it in parallel. Each reader receives partial results.

  For now it always uses disk files, but other buffering policies and other kinds of scans
  (ie each reader receives complete results) may be useful in future.

  The upcoming parallel hash join feature will use this facility.

  Author: Thomas Munro
  Reviewed-By: Peter Geoghegan, Andres Freund, Robert Haas
  Discussion: https://postgr.es/m/CAEepm=2W=cOkiZxcg6qiFQP-dHUe09aqTrEMM7yJDrHMhDv_RA@mail.gmail.com
* Fix typo in comment (Magnus Hagander, 2017-12-18)
  Author: David Rowley
* Suppress compiler warning about no function return value. (Tom Lane, 2017-12-17)
  Compilers that don't know that ereport(ERROR) doesn't return complained about the new
  coding in scanint8() introduced by commit 101c7ee3e. Tweak coding to avoid the warning.
  Per buildfarm.
* Fix pruning of locked and updated tuples. (Andres Freund, 2017-12-14)
  Previously it was possible that a tuple was not pruned during vacuum, even though its
  update xmax (i.e. the updating xid in a multixact with both key share lockers and an
  updater) was below the cutoff horizon.

  As the freezing code assumed, rightly so, that that's not supposed to happen, xmax would
  be preserved (as a member of a new multixact or as xmax directly). That causes two
  problems: For one, the tuple is below the xmin horizon, which can cause problems if the
  clog is truncated or once there's an xid wraparound. The bigger problem is that it will
  break HOT chains, which in turn can lead to two breakages: First, failing index lookups,
  which in turn can e.g. lead to constraints being violated. Second, future hot prunes /
  vacuums can end up making invisible tuples visible again. There are other harmful
  scenarios.

  Fix the problem by recognizing that tuples can be DEAD instead of RECENTLY_DEAD, even if
  the multixactid has alive members, if the update_xid is below the xmin horizon. That's
  safe because newer versions of the tuple will contain the locking xids.

  A followup commit will harden the code somewhat against future similar bugs and already
  corrupted data.

  Author: Andres Freund, with changes by Alvaro Herrera
  Reported-By: Daniel Wood
  Analyzed-By: Andres Freund, Alvaro Herrera, Robert Haas, Peter Geoghegan, Daniel Wood,
               Yi Wen Wong, Michael Paquier
  Reviewed-By: Alvaro Herrera, Robert Haas, Michael Paquier
  Discussion: https://postgr.es/m/E5711E62-8FDF-4DCA-A888-C200BF6B5742@amazon.com
              https://postgr.es/m/20171102112019.33wb7g5wp4zpjelu@alap3.anarazel.de
  Backpatch: 9.3-
* Rethink MemoryContext creation to improve performance. (Tom Lane, 2017-12-13)
  This patch makes a number of interrelated changes to reduce the overhead involved in
  creating/deleting memory contexts. The key ideas are:

  * Include the AllocSetContext header of an aset.c context in its first malloc request,
    rather than allocating it separately in TopMemoryContext. This means that we now always
    create an initial or "keeper" block in an aset, even if it never receives any
    allocation requests.

  * Create freelists in which we can save and recycle recently-destroyed asets (this idea
    is due to Robert Haas).

  * In the common case where the name of a context is a constant string, just store a
    pointer to it in the context header, rather than copying the string.

  The first change eliminates a palloc/pfree cycle per context, and also avoids bloat in
  TopMemoryContext, at the price that creating a context now involves a malloc/free cycle
  even if the context never receives any allocations. That would be a loser for some common
  usage patterns, but recycling short-lived contexts via the freelist eliminates that pain.

  Avoiding copying constant strings not only saves strlen() and strcpy() overhead, but is
  an essential part of the freelist optimization because it makes the context header size
  constant. Currently we make no attempt to use the freelist for contexts with non-constant
  names. (Perhaps someday we'll need to think harder about that, but in current usage, most
  contexts with custom names are long-lived anyway.)

  The freelist management in this initial commit is pretty simplistic, and we might want to
  refine it later --- but in common workloads that will never matter because the freelists
  will never get full anyway.

  To create a context with a non-constant name, one is now required to call
  AllocSetContextCreateExtended and specify the MEMCONTEXT_COPY_NAME option.
  AllocSetContextCreate becomes a wrapper macro, and it includes a test that will complain
  about non-string-literal context name parameters on gcc and similar compilers.

  An unfortunate side effect of making AllocSetContextCreate a macro is that one is now
  *required* to use the size parameter abstraction macros (ALLOCSET_DEFAULT_SIZES and
  friends) with it; the pre-9.6 habit of writing out individual size parameters no longer
  works unless you switch to AllocSetContextCreateExtended.

  Internally to the memory-context-related modules, the context creation APIs are
  simplified, removing the rather baroque original design whereby a context-type module
  called mcxt.c which then called back into the context-type module. That saved a bit of
  code duplication, but not much, and it prevented context-type modules from exercising
  control over the allocation of context headers.

  In passing, I converted the test-and-elog validation of aset size parameters into Asserts
  to save a few more cycles. The original thought was that callers might compute size
  parameters on the fly, but in practice nobody does that, so it's useless to expend cycles
  on checking those numbers in production builds.

  Also, mark the memory context method-pointer structs "const", just for cleanliness.

  Discussion: https://postgr.es/m/2264.1512870796@sss.pgh.pa.us
* Add float.h include to int8.c, for isnan(). (Andres Freund, 2017-12-12)
  port.h redirects isnan() to _isnan() on windows, which in turn is provided by float.h
  rather than math.h. Therefore include the latter as well.
  Per buildfarm.
* Consistently use PG_INT(16|32|64)_(MIN|MAX). (Andres Freund, 2017-12-12)
  Per buildfarm animal woodlouse.
* Use new overflow-aware integer operations. (Andres Freund, 2017-12-12)
  A previous commit added inline functions that provide fast(er) and correct overflow
  checks for signed integer math. Use them in a significant portion of backend code.
  There's more to touch in both backend and frontend code, but these were the easily
  identifiable cases.

  The old overflow checks are noticeable in integer-heavy workloads.

  A secondary benefit is that getting rid of overflow checks that rely on signed integer
  overflow wrapping around will allow us to get rid of -fwrapv in the future, which in turn
  slows down other code.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20171024103954.ztmatprlglz3rwke@alap3.anarazel.de
* Remove obsolete comment. (Robert Haas, 2017-12-12)
  Commit 8b304b8b72b0a60f1968d39f01cf817c8df863ec removed replacement selection, but left
  behind this comment text. The optimization to which the comment refers is not relevant
  without replacement selection, because if we had so few tuples as to require only one
  tape, we would have just completed the sort in memory.
  Peter Geoghegan
  Discussion: http://postgr.es/m/CAH2-WznqupLA8CMjp+vqzoe0yXu0DYYbQSNZxmgN76tLnAOZ_w@mail.gmail.com
* Support Parallel Append plan nodes. (Robert Haas, 2017-12-05)
  When we create an Append node, we can spread out the workers over the subplans instead of
  piling on to each subplan one at a time, which should typically be a bit more efficient,
  both because the startup cost of any plan executed entirely by one worker is paid only
  once and also because of reduced contention. We can also construct Append plans using a
  mix of partial and non-partial subplans, which may allow for parallelism in places that
  otherwise couldn't support it.

  Unfortunately, this patch doesn't handle the important case of parallelizing UNION ALL by
  running each branch in a separate worker; the executor infrastructure is added here, but
  more planner work is needed.

  Amit Khandekar, Robert Haas, Amul Sul, reviewed and tested by Ashutosh Bapat, Amit
  Langote, Rafia Sabih, Amit Kapila, and Rajkumar Raghuwanshi.
  Discussion: http://postgr.es/m/CAJ3gD9dy0K_E8r727heqXoBmWZ83HwLFwdcaSSmBQ1+S+vRuUQ@mail.gmail.com
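A hedged sketch of what this can look like in a plan. The partitioned table is hypothetical,
and the enable_parallel_append setting shown here is an assumption about the accompanying
planner control, not something stated in the commit text:

    SET enable_parallel_append = on;    -- assumed GUC for this feature
    EXPLAIN (COSTS OFF)
    SELECT count(*) FROM measurements;  -- hypothetical partitioned table
    -- A possible plan fragment:
    --   Gather
    --     -> Parallel Append
    --          -> Parallel Seq Scan on measurements_2017
    --          -> Parallel Seq Scan on measurements_2018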
* Clean up assorted messiness around AllocateDir() usage. (Tom Lane, 2017-12-04)
  This patch fixes a couple of low-probability bugs that could lead to reporting an
  irrelevant errno value (and hence possibly a wrong SQLSTATE) concerning directory-open or
  file-open failures. It also fixes places where we took shortcuts in reporting such
  errors, either by using elog instead of ereport or by using ereport but forgetting to
  specify an errcode. And it eliminates a lot of just plain redundant error-handling code.

  In service of all this, export fd.c's formerly-static function ReadDirExtended, so that
  external callers can make use of the coding pattern

      dir = AllocateDir(path);
      while ((de = ReadDirExtended(dir, path, LOG)) != NULL)

  if they'd like to treat directory-open failures as mere LOG conditions rather than
  errors. Also fix FreeDir to be a no-op if we reach it with dir == NULL, as such a coding
  pattern would cause. Then, remove code at many call sites that was throwing an error or
  log message for AllocateDir failure, as ReadDir or ReadDirExtended can handle that job
  just fine. Aside from being a net code savings, this gets rid of a lot of
  not-quite-up-to-snuff reports, as mentioned above. (In some places these changes result
  in replacing a custom error message such as "could not open tablespace directory" with
  more generic wording "could not open directory", but it was agreed that the custom
  wording buys little as long as we report the directory name.) In some other call sites
  where we can't just remove code, change the error reports to be fully
  project-style-compliant.

  Also reorder code in restoreTwoPhaseData that was acquiring a lock between AllocateDir
  and ReadDir; in the unlikely but surely not impossible case that LWLockAcquire changes
  errno, AllocateDir failures would be misreported. There is no great value in opening the
  directory before acquiring TwoPhaseStateLock, so just do it in the other order.

  Also fix CheckXLogRemoved to guarantee that it preserves errno, as quite a number of call
  sites are implicitly assuming. (Again, it's unlikely but I think not impossible that
  errno could change during a SpinLockAcquire. If so, this function was broken for its own
  purposes as well as breaking callers.)

  And change a few places that were using not-per-project-style messages, such as "could
  not read directory" when "could not open directory" is more correct.

  Back-patch the exporting of ReadDirExtended, in case we have occasion to back-patch some
  fix that makes use of it; it's not needed right now but surely making it global is pretty
  harmless. Also back-patch the restoreTwoPhaseData and CheckXLogRemoved fixes. The rest of
  this is essentially cosmetic and need not get back-patched.

  Michael Paquier, with a bit of additional work by me
  Discussion: https://postgr.es/m/CAB7nPqRpOCxjiirHmebEFhXVTK7V5Jvw4bz82p7Oimtsm3TyZA@mail.gmail.com
* SQL procedures (Peter Eisentraut, 2017-11-30)
  This adds a new object type "procedure" that is similar to a function but does not have a
  return type and is invoked by the new CALL statement instead of SELECT or similar. This
  implementation is aligned with the SQL standard and compatible with or similar to other
  SQL implementations.

  This commit adds new commands CALL, CREATE/ALTER/DROP PROCEDURE, as well as ALTER/DROP
  ROUTINE that can refer to either a function or a procedure (or an aggregate function, as
  an extension to SQL). There is also support for procedures in various utility commands
  such as COMMENT and GRANT, as well as support in pg_dump and psql. Support for defining
  procedures is available in all the languages supplied by the core distribution.

  While this commit is mainly syntax sugar around existing functionality, future features
  will rely on having procedures as a separate object type.

  Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
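A minimal sketch of the new syntax; the procedure body and target table are made up for
illustration:

    CREATE TABLE audit_log (note text);          -- hypothetical target table

    CREATE PROCEDURE log_note(msg text)
    LANGUAGE SQL
    AS $$
        INSERT INTO audit_log VALUES (msg);
    $$;

    CALL log_note('nightly run finished');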
* Fix neqjoinsel's behavior for semi/anti join cases. (Tom Lane, 2017-11-29)
  Previously, this function estimated the selectivity as 1 minus eqjoinsel() for the
  negator equality operator, regardless of join type (I think there was an expectation that
  eqjoinsel would handle the join type). But actually this is completely wrong for semijoin
  cases: the fraction of the LHS that has a non-matching row is not one minus the fraction
  of the LHS that has a matching row. In reality a semijoin with <> will nearly always
  succeed: it can only fail when the RHS is empty, or it contains a single distinct value
  that is equal to the particular LHS value, or the LHS value is null. The only one of
  those things we should have much confidence in estimating is the fraction of LHS values
  that are null, so let's just take the selectivity as 1 minus outer nullfrac.

  Per coding convention, antijoin should be estimated the same as semijoin.

  Arguably this is a bug fix, but in view of the lack of field complaints and the risk of
  destabilizing plans, no back-patch.

  Thomas Munro, reviewed by Ashutosh Bapat
  Discussion: https://postgr.es/m/CAEepm=270ze2hVxWkJw-5eKzc3AB4C9KpH3L2kih75R5pdSogg@mail.gmail.com
* Update typedefs.list and re-run pgindent (Robert Haas, 2017-11-29)
  Discussion: http://postgr.es/m/CA+TgmoaA9=1RWKtBWpDaj+sF3Stgc8sHgf5z=KGtbjwPLQVDMA@mail.gmail.com
* Make has_sequence_privilege support WITH GRANT OPTION (Joe Conway, 2017-11-26)
  The various has_*_privilege() functions all support an optional WITH GRANT OPTION added
  to the supported privilege types, to test whether the privilege is held with grant
  option. That is, all except the has_sequence_privilege() variants. Fix that.
  Back-patch to all supported branches.
  Discussion: https://postgr.es/m/005147f6-8280-42e9-5a03-dd2c1e4397ef@joeconway.com
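A quick example of the now-supported call form (the role and sequence names are made up and
must exist for the query to run):

    SELECT has_sequence_privilege('app_admin', 'public.orders_id_seq',
                                  'USAGE WITH GRANT OPTION');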
* Improve planner's handling of set-returning functions in grouping columns. (Tom Lane, 2017-11-25)
  Improve query_is_distinct_for() to accept SRFs in the targetlist when we can prove
  distinctness from a DISTINCT clause. In that case the de-duplication will surely happen
  after SRF expansion, so the proof still works. Continue to punt in the case where we'd
  try to prove distinctness from GROUP BY (or, in the future, source relations). To do
  that, we'd have to determine whether the SRFs were in the grouping columns or elsewhere
  in the tlist, and it still doesn't seem worth the trouble. But this trivial change allows
  us to recognize that "SELECT DISTINCT unnest(foo) FROM ..." produces unique-ified output,
  which seems worth having.

  Also, fix estimate_num_groups() to consider the possibility of SRFs in the grouping
  columns. Its failure to do so was masked before v10 because grouping_planner() scaled up
  plan rowcount estimates by the estimated SRF multiplier after performing grouping. That
  doesn't happen anymore, which is more correct, but it means we need an adjustment in the
  estimate for the number of groups. Failure to do this leads to an underestimate for the
  number of output rows of subqueries like "SELECT DISTINCT unnest(foo)" compared to what
  9.6 and earlier estimated, thus breaking plan choices in some cases.

  Per report from Dmitry Shalashov. Back-patch to v10 to avoid degraded plan choices
  compared to previous releases.

  Discussion: https://postgr.es/m/CAKPeCUGAeHgoh5O=SvcQxREVkoX7UdeJUMj1F5=aBNvoTa+O8w@mail.gmail.com
* Improve valgrind logic in aset.c, and fix multiple issues in generation.c. (Tom Lane, 2017-11-24)
  Revise aset.c so that all the "private" fields of chunk headers are marked NOACCESS when
  outside the module, improving on the previous coding which protected only requested_size.
  Fix a couple of corner case bugs, such as failing to re-protect the header during a
  failure exit from AllocSetRealloc, and wrong padding-size calculation for an oversize
  allocation request.

  Apply the same design to generation.c, and also fix several bugs therein that I found by
  dint of hacking the code to use generation.c as the standard allocator and then running
  the core regression tests with it. Notably, we have to track the actual size of each
  block, else the wipe_mem call in GenerationReset clears the wrong amount of memory for an
  oversize-chunk block; and GenerationCheck needs a way of identifying freed chunks that
  isn't fooled by palloc(0). I chose to fix the latter by resetting the context pointer to
  NULL in a freed chunk, roughly like what happens in a freed aset.c chunk.

  Discussion: https://postgr.es/m/E1eHa4J-0006hI-Q8@gemulon.postgresql.org
* Mostly-cosmetic improvements in memory chunk header alignment coding. (Tom Lane, 2017-11-24)
  Add commentary about what we're doing and why. Apply the method used for padding in
  GenerationChunk to AllocChunkData, replacing the rather ad-hoc solution used in commit
  7e3aa03b4. Reorder fields in GenerationChunk so that the padding calculation will work
  even if sizeof(size_t) is different from sizeof(void *) --- likely that will never
  happen, but we don't need the assumption if we do it like this. Improve static assertions
  about alignment.

  In passing, fix a couple of oversights in the "large chunk" path in GenerationAlloc().

  Discussion: https://postgr.es/m/E1eHa4J-0006hI-Q8@gemulon.postgresql.org
* Fix bug in generation.c's valgrind support. (Tom Lane, 2017-11-24)
  This doesn't look like the last such bug, but it's one that the test_decoding regression
  test is tripping over. Per buildfarm.
  Tomas Vondra
  Discussion: https://postgr.es/m/c903f275-2150-fa52-64bf-dca7b53ebf8d@fuzzy.cz
* Ensure sizeof(GenerationChunk) is maxaligned. (Tom Lane, 2017-11-23)
  Per buildfarm.
  Also improve some comments.
* Tweak code for older compilers (Simon Riggs, 2017-11-23)
  Attempt to quiesce build farm
  Author: Tomas Vondra
* Generational memory allocator (Simon Riggs, 2017-11-23)
  Add a new style of memory allocator, known as Generational, appropriate for use in cases
  where memory is allocated and then freed in roughly oldest-first (FIFO) order.

  Use the new allocator for logical decoding's reorderbuffer to significantly reduce memory
  usage and improve performance.

  Author: Tomas Vondra
  Reviewed-by: Simon Riggs
* Fix pg_control_checkpoint from commit 4b0d28de06 (Simon Riggs, 2017-11-21)
  Author: Simon Riggs <simon@2ndQuadrant.com>
  Reported-By: Andreas Seltenreich <seltenreich@gmx.de>
* Fix compiler warning in rangetypes_spgist.c. (Tom Lane, 2017-11-18)
  On gcc 7.2.0, comparing a pointer to (Datum) 0 produces a warning. Treat it as a simple
  pointer to avoid that; this is more consistent with comparable code elsewhere, anyway.
  Tomas Vondra
  Discussion: https://postgr.es/m/99410021-61ef-9a9a-9bc8-f733ece637ee@2ndquadrant.com
* Merge near-duplicate code in RI triggers. (Tom Lane, 2017-11-18)
  Merge ri_restrict_del and ri_restrict_upd into one function ri_restrict. Create a
  function ri_setnull that is the common implementation of RI_FKey_setnull_del and
  RI_FKey_setnull_upd. Likewise create a function ri_setdefault that is the common
  implementation of RI_FKey_setdefault_del and RI_FKey_setdefault_upd. All of these pairs
  of functions were identical except for needing to check for no-actual-key-change in the
  UPDATE cases; the one extra if-test is a small price to pay for saving so much code.

  Aside from removing about 400 lines of essentially duplicate code, this allows us to
  recognize that we were uselessly caching two identical plans whenever there were pairs of
  triggers using these duplicated functions (which is likely very common).

  Ildar Musin, reviewed by Ildus Kurbangaliev
  Discussion: https://postgr.es/m/ca7064a7-6adc-6f22-ca47-8615ba9425a5@postgrespro.ru
* Improve to_date/to_number/to_timestamp behavior with multibyte characters. (Tom Lane, 2017-11-18)
  The documentation says that these functions skip one input character per literal
  (non-pattern) format character. Actually, though, they skipped one input *byte* per
  literal *byte*, which could be hugely confusing if either data or format contained
  multibyte characters.

  To fix, adjust the FormatNode representation and parse_format() so that multibyte format
  characters are stored as one FormatNode not several, and adjust the data-skipping bits to
  advance by pg_mblen() not necessarily one byte. There's no user-visible behavior change
  on the to_char() side, although the internal representation changes.

  Commit e87d4965b had already fixed most places where we skip characters on the basis of
  non-literal format patterns to advance by characters not bytes, but this gets one more
  place, the SKIP_THth macro. I think everything in formatting.c gets that right now.

  It'd be nice to have some regression test cases covering this behavior; but of course
  there's no way to do so in an encoding-agnostic way, and many of the interesting aspects
  would also require unportable locale selections. So I've not bothered here.

  Discussion: https://postgr.es/m/28186.1510957703@sss.pgh.pa.us
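A hedged illustration of the kind of case this affects (assuming a UTF-8 database; not taken
from the commit): a format string containing multibyte literal characters now skips one input
character per literal character rather than one byte per byte.

    SELECT to_date('2018年01月16日', 'YYYY"年"MM"月"DD"日"');
    -- expected result: 2018-01-16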
* Fix quoted-substring handling in format parsing for to_char/to_number/etc. (Tom Lane, 2017-11-18)
  This code evidently intended to treat backslash as an escape character within
  double-quoted substrings, but it was sufficiently confused that cases like ..."foo\\"...
  did not work right: the second backslash managed to quote the double-quote after it,
  despite being quoted itself. Rewrite to get that right, while preserving the existing
  behavior outside double-quoted substrings, which is that backslash isn't special except
  in the combination \".

  Comparing to Oracle, it seems that their version of to_char() for timestamps allows
  literal alphanumerics only within double quotes, while non-alphanumerics are allowed
  outside quotes; backslashes aren't special anywhere; there is no way at all to emit a
  literal double quote. (Bizarrely, their to_char() for numbers is different; it doesn't
  allow literal text at all AFAICT.) The fact that they don't treat backslash as special
  justifies our existing behavior for backslash outside double quotes. I considered making
  backslash inside double quotes act the same way (ie, special only if before "), which in
  a green field would be a more consistent behavior. But that would likely break more
  existing SQL code than what this patch does.

  Add some test cases illustrating this behavior. (Only the last new case actually changes
  behavior in this commit.) Little of this behavior was documented, either, so fix that.

  Discussion: https://postgr.es/m/3626.1510949486@sss.pgh.pa.us
* Update postgresql.conf.sample comment for bgwriter_lru_maxpages (Robert Haas, 2017-11-17)
  Commit 14ca9abfbe4643408ad6ed3279f2f6366cafb3f1 should have done this, but did not.
  Jeff Janes
  Discussion: http://postgr.es/m/CAMkU=1yWOvL+YFYzGM9yXSoWjxr_5_Ny78pPzLKQCkfgB7H-JQ@mail.gmail.com
* Prevent to_number() from losing data when template doesn't match exactly. (Tom Lane, 2017-11-17)
  Non-data template patterns would consume characters whether or not those characters were
  what the pattern expected, for example

      SELECT TO_NUMBER('1234', '9,999');

  produced 134 because the '2' got eaten by the comma pattern. This seems undesirable, not
  least because it doesn't happen in Oracle.

  For the ',' and 'G' template patterns, we can fix this by consuming characters only if
  they match what the pattern would output. For non-data patterns such as 'L' and 'TH', it
  seems impractical to tighten things up to the point of consuming only exact matches to
  what the pattern would output; but we can improve matters quite a lot by redefining the
  behavior as "consume only characters that aren't digits, signs, decimal point, or comma".

  Also, fix it so that the behavior is to consume the number of *characters* the pattern
  would output, not the number of *bytes*. The old coding would do surprising things with
  non-ASCII currency symbols, for example. (It would be good to apply that rule for literal
  text as well, but this commit only fixes it for non-data patterns.)

  Oliver Ford, reviewed by Thomas Munro and Nathan Wagner, and whacked around a bit more by me
  Discussion: https://postgr.es/m/CAGMVOdvpbMqPf9XWNzOwBpzJfErkydr_fEGhmuDGa015z97mwg@mail.gmail.com
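Following the example quoted in the message, the expected behavior after this change (a
hedged reading of the fix, not text from the commit) is that the ',' pattern no longer
consumes a non-comma character:

    SELECT to_number('1234', '9,999');
    -- before: 134 (the '2' was eaten by the ',' pattern)
    -- after:  presumably 1234, since ',' now consumes a character only if it is a comma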
* Make PL/Python handle domain-type conversions correctly. (Tom Lane, 2017-11-16)
  Fix PL/Python so that it can handle domains over composite, and so that it enforces
  domain constraints correctly in other cases that were not always done properly before.
  Notably, it didn't do arrays of domains right (oversight in commit c12d570fa), and it
  failed to enforce domain constraints when returning a composite type containing a domain
  field, and if a transform function is being used for a domain's base type then it failed
  to enforce domain constraints on the result. Also, in many places it missed checking
  domain constraints on null values, because the plpy_typeio code simply wasn't called for
  Py_None.

  Rather than try to band-aid these problems, I made a significant refactoring of the
  plpy_typeio logic. The existing design of recursing for array and composite members is
  extended to also treat domains as containers requiring recursion, and the APIs for the
  module are cleaned up and simplified.

  The patch also modifies plpy_typeio to rely on the typcache more than it did before
  (which was pretty much not at all). This reduces the need for repetitive lookups, and
  lets us get rid of an ad-hoc scheme for detecting changes in composite types. I added a
  couple of small features to typcache to help with that.

  Although some of this is fixing bugs that long predate v11, I don't think we should risk
  a back-patch: it's a significant amount of code churn, and there've been no complaints
  from the field about the bugs.

  Tom Lane, reviewed by Anthony Bykov
  Discussion: https://postgr.es/m/24449.1509393613@sss.pgh.pa.us
* Update postgresql.conf.sample to match pg_settings classifications. (Robert Haas, 2017-11-16)
  A handful of settings, most notably shared_preload_libraries, were just plain in the
  wrong place compared to their assigned config_group value in guc.c (and thus
  pg_settings). In other cases the names of the sections in postgresql.conf.sample were
  mildly different from the corresponding entries in config_group_names[]. Make it all
  consistent.
  Adrián Escoms, reviewed by me.
  Discussion: http://postgr.es/m/CACksPC2veEmFRYqwYepWYO9U7aFhAx6sYq+WqjTyHw7uV=E=pw@mail.gmail.com
* Back out the session_start and session_end hooks feature. (Andrew Dunstan, 2017-11-16)
  It's become apparent during testing that there are problems with at least the testing
  regime. I don't think we should have it without a working test regime, and the
  difficulties might indicate implementation problems anyway, so I'm backing out the whole
  thing until that's sorted out.
  This reverts commits 7459484 9989f92 cd8ce3a
* Add hooks for session start and session end (Andrew Dunstan, 2017-11-15)
  These hooks can be used in loadable modules. A simple test module is included.
  Discussion: https://postgr.es/m/20170720204733.40f2b7eb.nagata@sraoss.co.jp
  Fabrízio de Royes Mello and Yugo Nagata
  Reviewed by Michael Paquier and Aleksandr Parfenov
* Fix typo. (Robert Haas, 2017-11-15)
  Jesper Pedersen
  Discussion: http://postgr.es/m/000f92d6-f623-95a5-b341-46e2c0495cea@redhat.com
* Add parallel_leader_participation GUC. (Robert Haas, 2017-11-15)
  Sometimes, for testing, it's useful to have the leader do nothing but read tuples from
  workers; and it's possible that could work out better even in production.
  Thomas Munro, reviewed by Amit Kapila and by me. A few final tweaks by me.
  Discussion: http://postgr.es/m/CAEepm=2U++Lp3bNTv2Bv_kkr5NE2pOyHhxU=G0YTa4ZhSYhHiw@mail.gmail.com
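A minimal sketch of using the new GUC, for instance when testing worker-only execution of a
parallel plan (the queried table is hypothetical):

    SET parallel_leader_participation = off;  -- leader only gathers tuples from workers
    EXPLAIN ANALYZE SELECT count(*) FROM big_table;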
* Don't call pgwin32_message_to_UTF16() without CurrentMemoryContext. (Noah Misch, 2017-11-12)
  PostgreSQL running as a Windows service crashed upon calling write_stderr() before
  MemoryContextInit(). This fix completes work started in
  5735efee15540765315aa8c1a230575e756037f7. Messages this early contain only ASCII bytes;
  if we removed the CurrentMemoryContext requirement, the ensuing conversions would have no
  effect. Back-patch to 9.3 (all supported versions).
  Takayuki Tsunakawa, reviewed by Michael Paquier.
  Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1F80CC73@G01JPEXMBYT05