path: root/src/backend
...
* Refactor hash_agg_entry_size(). (Jeff Davis, 2020-02-06)

  Consolidate the calculations for hash table size estimation. This will
  help with upcoming Hash Aggregation work that will add additional call
  sites.
* Logical Tape Set: use min heap for freelist. (Jeff Davis, 2020-02-06)

  Previously, the freelist of blocks was tracked as an occasionally-sorted
  array. A min heap is more resilient to larger freelists or more frequent
  changes between reading and writing.

  Discussion: https://postgr.es/m/97c46a59c27f3c38e486ca170fcbc618d97ab049.camel%40j-davis.com
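For readers who want the shape of the data structure: below is a minimal sketch of a binary min heap used as a block freelist. Push and pop are O(log n), which is what makes the structure cheap to maintain under large freelists and frequent switches between freeing and reusing blocks. This is an illustration under invented names, not the committed logtape.c code; capacity management is omitted.

    /* Illustrative min-heap freelist; names and types are hypothetical. */
    typedef struct FreeListHeap
    {
        long   *blocks;     /* heap-ordered array, blocks[0] is the minimum */
        int     nblocks;    /* number of entries currently in the heap */
    } FreeListHeap;

    /* Push a freed block number; O(log n), no re-sorting of an array.
     * Assumes the caller has ensured capacity for one more entry. */
    static void
    freelist_push(FreeListHeap *h, long blockno)
    {
        int     pos = h->nblocks++;

        /* sift up: pull parents down while they are larger */
        while (pos > 0 && h->blocks[(pos - 1) / 2] > blockno)
        {
            h->blocks[pos] = h->blocks[(pos - 1) / 2];
            pos = (pos - 1) / 2;
        }
        h->blocks[pos] = blockno;
    }

    /* Pop the lowest-numbered free block; O(log n). Caller must ensure
     * the heap is non-empty. */
    static long
    freelist_pop(FreeListHeap *h)
    {
        long    result = h->blocks[0];
        long    last = h->blocks[--h->nblocks];
        int     pos = 0;

        /* sift down: move the hole toward the smaller child */
        for (;;)
        {
            int     child = 2 * pos + 1;

            if (child >= h->nblocks)
                break;
            if (child + 1 < h->nblocks &&
                h->blocks[child + 1] < h->blocks[child])
                child++;
            if (last <= h->blocks[child])
                break;
            h->blocks[pos] = h->blocks[child];
            pos = child;
        }
        h->blocks[pos] = last;
        return result;
    }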
* Fix typo. (Amit Kapila, 2020-02-06)

  Reported-by: Amit Langote
  Author: Amit Langote
  Backpatch-through: 9.6, where it was introduced
  Discussion: https://postgr.es/m/CA+HiwqFNADeukaaGRmTqANbed9Fd81gLi08AWe_F86_942Gspw@mail.gmail.com
* Fix bug in LWLock statistics mechanism. (Fujii Masao, 2020-02-06)

  Previously, PostgreSQL built with -DLWLOCK_STATS could report more than
  one LWLock statistics entry for the same backend process and the same
  LWLock. This is strange; only one entry should be output in that case.

  The cause of this issue is that the key variable used for the LWLock
  stats hash table was not fully initialized. The key consists of two
  fields, and while they were initialized, the following 4 bytes allocated
  in the key variable for alignment padding were not. So even if the same
  key was specified, hash_search(HASH_ENTER) could not find the existing
  entry for that key and created a new one.

  This commit fixes the issue by initializing the key variable with zeros.
  As a side effect, the volume of LWLock statistics output is reduced
  considerably.

  Back-patch to v10, where commit 3761fe3c20 introduced the issue.

  Author: Fujii Masao
  Reviewed-by: Julien Rouhaud, Kyotaro Horiguchi
  Discussion: https://postgr.es/m/26359edb-798a-568f-d93a-6aafac49752d@oss.nttdata.com
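The padding hazard is easy to reproduce in miniature. A hypothetical sketch follows; the struct and names are invented for the example, not the real lwlock.c definitions:

    /* Illustration of uninitialized struct padding breaking hash lookups. */
    #include <string.h>

    typedef struct LWLockStatsKey
    {
        int     pid;        /* 4 bytes */
        short   lock_id;    /* 2 bytes; the compiler pads the struct to 8 */
    } LWLockStatsKey;

    void
    enter_stats_entry(void *htab, int my_pid, short id)
    {
        LWLockStatsKey key;

        /*
         * Broken: field-by-field initialization leaves the padding bytes
         * undefined, so two logically equal keys can compare or hash
         * unequal when the whole struct is treated as raw bytes, and a
         * hash_search(HASH_ENTER) creates a duplicate entry.
         *
         *     key.pid = my_pid;
         *     key.lock_id = id;
         *
         * Fixed: zero the whole variable first, padding included.
         */
        memset(&key, 0, sizeof(key));
        key.pid = my_pid;
        key.lock_id = id;

        /* ... hash_search(htab, &key, HASH_ENTER, &found) ... */
        (void) htab;
    }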
* Add leader_pid to pg_stat_activity (Michael Paquier, 2020-02-06)

  This new field tracks the PID of the group leader used with parallel
  query. For parallel workers and the leader, the value is set to the PID
  of the group leader. So, for the group leader, the value is the same as
  its own PID. Note that this reflects what PGPROC stores in shared
  memory: leader_pid is NULL if a backend has never been involved in
  parallel query. If the backend is using parallel query, or has used it
  at least once, the value is set until the backend exits.

  Author: Julien Rouhaud
  Reviewed-by: Sergei Kornilov, Guillaume Lelarge, Michael Paquier,
    Tomas Vondra
  Discussion: https://postgr.es/m/CAOBaU_Yy5bt0vTPZ2_LUM6cUcGeqmYNoJ8-Rgto+c2+w3defYA@mail.gmail.com
* Force tuple conversion when the source has missing attributes. (Andrew Gierth, 2020-02-05)

  Tuple conversion incorrectly concluded that no conversion was needed as
  long as all the attributes lined up. But if the source tuple has a
  missing attribute (from addition of a column with default), then the
  destination tupdesc might not reflect the same default. The typical
  symptom was that the affected columns would be unexpectedly NULL.

  Repair by always forcing conversion if the source has missing
  attributes, which will be filled in by the deform operation. (In theory
  we could optimize for when the destination has the same default, but
  that seemed overkill.)

  Backpatch to 11 where missing attributes were added. Per bug #16242.

  Vik Fearing (discovery, code, testing) and me (analysis, testcase).

  Discussion: https://postgr.es/m/16242-d1c9fca28445966b@postgresql.org
* Make vacuum buffer counters 64 bits wide (Alvaro Herrera, 2020-02-05)

  Using 32 bit counters means they can now realistically wrap around when
  vacuuming extremely large tables. Because they're signed integers,
  stats printed by vacuum look very odd when they do.

  We'd love to backpatch this, but refrain because the variables are
  exported and could cause third-party code to break.

  Reviewed-by: Julien Rouhaud, Tom Lane, Michael Paquier
  Discussion: https://postgr.es/m/20200131205926.GA16367@alvherre.pgsql
* Add kqueue(2) support to the WaitEventSet API. (Thomas Munro, 2020-02-05)

  Use kevent(2) to wait for events on the BSD family of operating systems
  and macOS. This is similar to the epoll(2) support added for Linux by
  commit 98a64d0bd.

  Author: Thomas Munro
  Reviewed-by: Andres Freund, Marko Tiikkaja, Tom Lane
  Tested-by: Mateusz Guzik, Matteo Beccati, Keith Fiske, Heikki
    Linnakangas, Michael Paquier, Peter Eisentraut, Rui DeSousa,
    Tom Lane, Mark Wong
  Discussion: https://postgr.es/m/CAEepm%3D37oF84-iXDTQ9MrGjENwVGds%2B5zTr38ca73kWR7ez_tA%40mail.gmail.com
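For a flavor of the primitive involved, here is a generic, self-contained BSD-style wait for socket readability using kqueue(2)/kevent(2). It is not the PostgreSQL WaitEventSet implementation, just an example of the API the commit builds on; EVFILT_READ here plays the role EPOLLIN plays on Linux.

    #include <sys/types.h>
    #include <sys/event.h>
    #include <sys/time.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    wait_for_readable(int sock_fd, int timeout_ms)
    {
        int             kq = kqueue();
        struct kevent   change;
        struct kevent   event;
        struct timespec ts = {timeout_ms / 1000,
                              (timeout_ms % 1000) * 1000000L};
        int             rc;

        if (kq < 0)
            return -1;

        /* Register interest in "fd became readable". */
        EV_SET(&change, sock_fd, EVFILT_READ, EV_ADD, 0, 0, NULL);

        /* One call both applies the change list and waits for events. */
        rc = kevent(kq, &change, 1, &event, 1, &ts);
        if (rc > 0)
            printf("fd %d readable, %ld bytes available\n",
                   (int) event.ident, (long) event.data);

        close(kq);
        return rc;              /* >0 event, 0 timeout, <0 error */
    }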
* Handle lack of DSM slots in parallel btree build, take 2. (Thomas Munro, 2020-02-05)

  Commit 74618e77 added a new check intended to fix a bug, but put it in
  the wrong place, so that parallel btree build was always disabled. Do
  the check after we've actually tried to create a DSM segment.

  Back-patch to 11, like the earlier commit.

  Reviewed-by: Peter Geoghegan
  Discussion: https://postgr.es/m/CAH2-WzmDABkJzrNnvf%2BOULK-_A_j9gkYg_Dz-H62jzNv4eKQTw%40mail.gmail.com
* Fix handling of "Subplans Removed" field in EXPLAIN output. (Tom Lane, 2020-02-04)

  Commit 499be013d added this field in a rather poorly-thought-through
  manner, with the result being that rather than being a field of the
  Append or MergeAppend plan node as intended (and as it seems to be, in
  text format), it was actually an element of the "Plans" subgroup. At
  least in JSON format, that's flat out invalid syntax, because "Plans"
  is an array, not an object.

  While it's not hard to move the generation of the field so that it
  appears where it's supposed to, this does result in a visible change
  in field order in text format, in cases where an Append or MergeAppend
  plan node has any InitPlans attached. That's slightly annoying to do
  in stable branches; but the alternative of continuing to emit broken
  non-text formats seems worse.

  Also, since the set of fields emitted is not supposed to be
  data-dependent in non-text formats, make sure that "Subplans Removed"
  appears in Append and MergeAppend nodes even when it's zero, in those
  formats. (The previous coding made it look like it could appear in
  some other node types such as BitmapAnd, but we don't actually support
  runtime pruning there, so don't emit it in those cases.)

  Per bug #16171 from Mahadevan Ramachandran. Fix by Daniel Gustafsson
  and Tom Lane, reviewed by Hamid Akhtar. Back-patch to v11 where this
  code came in.

  Discussion: https://postgr.es/m/16171-b72259ab75505fa2@postgresql.org
* Add missing break out of seqscan loop in logical replication (Alvaro Herrera, 2020-02-03)

  When replica identity is FULL (an admittedly unusual case), the loop
  that searches for tuples in execReplication.c didn't stop scanning the
  table once a matching tuple was found. Add the missing 'break'.

  Note the slight behavior change: we now return the first matching
  tuple rather than the last one. They are supposed to be
  indistinguishable anyway, so this shouldn't matter.

  Author: Konstantin Knizhnik
  Discussion: https://postgr.es/m/379743f6-ae91-b866-f7a2-5624e6d2b0a4@postgrespro.ru
* Add declaration-level assertions for compile-time checks (Michael Paquier, 2020-02-03)

  These new assertions can be used at file scope, outside of any
  function, for compile-time checks. This commit provides
  implementations for C and C++, and fallback implementations.

  Author: Peter Smith
  Reviewed-by: Andres Freund, Kyotaro Horiguchi, Dagfinn Ilmari
    Mannsåker, Michael Paquier
  Discussion: https://postgr.es/m/201DD0641B056142AC8C6645EC1B5F62014B8E8030@SYD1217
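One common way to build such a facility, sketched below: prefer C11 _Static_assert where available, and otherwise fall back to a declaration whose array size goes negative when the condition is false, which any compiler rejects. Macro and identifier names here are illustrative, not necessarily those used in c.h.

    /* Declaration-level assertion, usable at file scope. */
    #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L
    #define STATIC_ASSERT_DECL(cond, msg) _Static_assert(cond, msg)
    #else
    /* Repeated identical extern declarations are legal C, so this macro
     * can be used any number of times; a false condition makes the array
     * size -1, which fails to compile. */
    #define STATIC_ASSERT_DECL(cond, msg) \
        extern int static_assert_check[(cond) ? 1 : -1]
    #endif

    /* Example use, outside any function: */
    STATIC_ASSERT_DECL(sizeof(long) >= 4, "long must be at least 32 bits");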
* Optimizations for integer to decimal output. (Andrew Gierth, 2020-02-01)

  Using a lookup table of digit pairs reduces the number of divisions
  needed, and calculating the length upfront saves some work; these
  ideas are taken from the code previously committed for floats.

  David Fetter, reviewed by Kyotaro Horiguchi, Tels, and me.

  Discussion: https://postgr.es/m/20190924052620.GP31596%40fetter.org
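The digit-pair idea in miniature: one divide/modulo by 100 plus a table lookup emits two characters at a time, halving the number of divisions. A simplified sketch, not the committed numutils.c code (in particular, it computes the length at the end rather than upfront):

    #include <string.h>

    static const char DIGIT_PAIRS[201] =
        "00010203040506070809"
        "10111213141516171819"
        "20212223242526272829"
        "30313233343536373839"
        "40414243444546474849"
        "50515253545556575859"
        "60616263646566676869"
        "70717273747576777879"
        "80818283848586878889"
        "90919293949596979899";

    /* Write the decimal text of value into buf (no terminator); buf must
     * hold at least 10 bytes.  Returns the number of characters written. */
    static int
    uint32_to_decimal(unsigned int value, char *buf)
    {
        char    tmp[10];
        char   *p = tmp + sizeof(tmp);
        int     len;

        /* peel off two digits per division */
        while (value >= 100)
        {
            unsigned int rem = value % 100;

            value /= 100;
            p -= 2;
            memcpy(p, DIGIT_PAIRS + rem * 2, 2);
        }
        /* 0..99 remains: one more pair, or a single digit */
        if (value >= 10)
        {
            p -= 2;
            memcpy(p, DIGIT_PAIRS + value * 2, 2);
        }
        else
            *--p = (char) ('0' + value);

        len = (int) (tmp + sizeof(tmp) - p);
        memcpy(buf, p, len);
        return len;
    }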
* Fix memory leak on DSM slot exhaustion. (Thomas Munro, 2020-02-01)

  If we attempt to create a DSM segment when no slots are available, we
  should return the memory to the operating system. Previously we did
  that if the DSM_CREATE_NULL_IF_MAXSEGMENTS flag was passed in, but we
  didn't do it if an error was raised. Repair.

  Back-patch to 9.4, where DSM segments arrived.

  Author: Thomas Munro
  Reviewed-by: Robert Haas
  Reported-by: Julian Backes
  Discussion: https://postgr.es/m/CA%2BhUKGKAAoEw-R4om0d2YM4eqT1eGEi6%3DQot-3ceDR-SLiWVDw%40mail.gmail.com
* Fix not-quite-right string comparison in parse_jsonb_index_flags(). (Tom Lane, 2020-01-31)

  This code would accept "strinX", where X is any 1-byte character, as
  meaning "string". Clearly it wasn't meant to do that.

  No back-patch, since this doesn't affect correct queries and there's
  some tiny chance we'd break somebody's incorrect query in a minor
  release.

  Report and patch by Dominik Czarnota.

  Discussion: https://postgr.es/m/CABEVAa1dU0mDCAfaT8WF2adVXTDsLVJy_izotg6ze_hh-cn8qQ@mail.gmail.com
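The general shape of the bug, as a hypothetical reconstruction (the real code lives in parse_jsonb_index_flags and may use different helpers): a comparison length one short of the keyword accepts any trailing byte.

    #include <string.h>
    #include <stdbool.h>

    bool
    is_string_flag_broken(const char *tok, size_t len)
    {
        /* "strinX" passes: only 5 of the 6 characters are checked */
        return len == 6 && strncmp(tok, "string", 5) == 0;
    }

    bool
    is_string_flag_fixed(const char *tok, size_t len)
    {
        /* compare the full keyword, and require the exact length */
        return len == 6 && strncmp(tok, "string", 6) == 0;
    }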
* Fix CheckAttributeType's handling of collations for ranges. (Tom Lane, 2020-01-31)

  Commit fc7695891 changed CheckAttributeType to recurse into ranges,
  but made it pass down the wrong collation (always InvalidOid, since
  ranges as such have no collation). This would result in guaranteed
  failure when considering a range type whose subtype is collatable.

  Embarrassingly, we lack any regression tests that would expose such a
  problem (but fortunately, somebody noticed before we shipped this bug
  in any release). Fix it to pass down the range's subtype collation
  property instead, and add some regression test cases to exercise
  collatable-subtype ranges a bit more.

  Back-patch to all supported branches, as the previous patch was.

  Report and patch by Julien Rouhaud, test cases tweaked by me.

  Discussion: https://postgr.es/m/CAOBaU_aBWqNweiGUFX0guzBKkcfJ8mnnyyGC_KBQmO12Mj5f_A@mail.gmail.com
* Sprinkle some const decorations (Peter Eisentraut, 2020-01-31)

  This might help clarify the API a bit.
* Report time spent in posix_fallocate() as a wait event. (Thomas Munro, 2020-01-31)

  When allocating DSM segments with posix_fallocate() on Linux (see
  commit 899bd785), report this activity as a wait event exactly as we
  would if we were using file-backed DSM rather than shm_open()-backed
  DSM.

  Author: Thomas Munro
  Discussion: https://postgr.es/m/CA%2BhUKGKCSh4GARZrJrQZwqs5SYp0xDMRr9Bvb%2BHQzJKvRgL6ZA%40mail.gmail.com
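The reporting pattern itself is simple: bracket the blocking call with wait-event start/end so the time shows up in pg_stat_activity. The sketch below assumes the usual pgstat hooks and the wait-event constant used for DSM zero-fill; treat the exact names as an assumption rather than a quote of the committed code.

    /* Sketch: report a blocking posix_fallocate() as a wait event.
     * Backend code; names are assumed, not quoted from dsm_impl.c. */
    static int
    dsm_posix_reserve(int fd, off_t size)
    {
        int     rc;

        pgstat_report_wait_start(WAIT_EVENT_DSM_FILL_ZERO_WRITE);
        rc = posix_fallocate(fd, 0, size);  /* may block reserving pages */
        pgstat_report_wait_end();

        return rc;              /* 0 on success, else an errno value */
    }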
* Adjust DSM and DSA slot usage constants. (Thomas Munro, 2020-01-31)

  When running a lot of large parallel queries concurrently, or a plan
  with a lot of separate Gather nodes, it is possible to run out of DSM
  slots. There are better solutions to these problems requiring
  architectural redesign work, but for now, let's adjust the constants
  so that it's more difficult to hit the limit.

  1. Previously, a DSA area would create up to four segments at each
     size before doubling the size. After this commit, it will create
     only two at each size, so it ramps up faster and therefore needs
     fewer slots (illustrated in the sketch after this entry).

  2. Previously, the total limit on DSM slots allowed for 2 per
     connection. Switch to 5 per connection.

  Also remove an obsolete nearby comment.

  Author: Thomas Munro
  Reviewed-by: Robert Haas, Andres Freund
  Discussion: https://postgr.es/m/CA%2BhUKGL6H2BpGbiF7Lj6QiTjTGyTLW_vLR%3DSn2tEBeTcYXiMKw%40mail.gmail.com
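A back-of-the-envelope way to see why point 1 saves slots: count the segments (each consuming a DSM slot) created to reach a target size when N segments of each size are made before doubling. The numbers below are entirely illustrative, not the actual DSA constants.

    #include <stdio.h>

    /* How many segments does it take to reach target_kb, creating
     * per_size segments of each size before doubling? */
    static int
    segments_needed(long initial_kb, long target_kb, int per_size)
    {
        long    size = initial_kb;
        long    total = 0;
        int     nsegs = 0;

        while (total < target_kb)
        {
            for (int i = 0; i < per_size && total < target_kb; i++)
            {
                total += size;
                nsegs++;
            }
            size *= 2;          /* ramp up to the next segment size */
        }
        return nsegs;
    }

    int
    main(void)
    {
        /* e.g. 1MB initial segments, 1GB target: the 2-per-size policy
         * reaches the target with noticeably fewer slots */
        printf("4 per size: %d slots\n", segments_needed(1024, 1048576, 4));
        printf("2 per size: %d slots\n", segments_needed(1024, 1048576, 2));
        return 0;
    }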
* Handle lack of DSM slots in parallel btree build. (Thomas Munro, 2020-01-31)

  If no DSM slots are available, a ParallelContext can still be created,
  but its seg pointer is NULL. Teach parallel btree build to cope with
  that by falling back to a regular non-parallel build, to avoid
  crashing with a segmentation fault.

  Back-patch to 11, where parallel CREATE INDEX landed.

  Reported-by: Nicola Contu
  Reviewed-by: Peter Geoghegan
  Discussion: https://postgr.es/m/CA%2BhUKGJgJEBnkuODBVomyK3MWFvDBbMVj%3Dgdt6DnRPU-5sQ6UQ%40mail.gmail.com
* Clean up newlines following left parentheses (Alvaro Herrera, 2020-01-30)

  We used to strategically place newlines after some function call left
  parentheses to make pgindent move the argument list a few chars to the
  left, so that the whole line would fit under 80 chars. However,
  pgindent no longer does that, so the newlines just made the code
  vertically longer for no reason. Remove those newlines, and reflow
  some of those lines for some extra naturality.

  Reviewed-by: Michael Paquier, Tom Lane
  Discussion: https://postgr.es/m/20200129200401.GA6303@alvherre.pgsql
* Remove excess parens in ereport() calls (Alvaro Herrera, 2020-01-30)

  Cosmetic cleanup, not worth backpatching.

  Reviewed-by: Tom Lane, Michael Paquier
  Discussion: https://postgr.es/m/20200129200401.GA6303@alvherre.pgsql
* Make inherited TRUNCATE perform access permission checks on parent table only. (Fujii Masao, 2020-01-31)

  Previously, a TRUNCATE command issued through a parent table checked
  the permissions not only on the parent table but also on the child
  tables inherited from it. This was a bug: inherited queries should
  perform access permission checks on the parent table only. This commit
  fixes that.

  Back-patch to all supported branches.

  Author: Amit Langote
  Reviewed-by: Fujii Masao
  Discussion: https://postgr.es/m/CAHGQGwFHdSvifhJE+-GSNqUHSfbiKxaeQQ7HGcYz6SC2n_oDcg@mail.gmail.com
* Fix slot data persistency when advancing physical replication slots (Michael Paquier, 2020-01-30)

  Advancing a physical replication slot with
  pg_replication_slot_advance() did not mark the slot as dirty if any
  advancing was done, preventing the follow-up checkpoint from flushing
  the slot data to disk. This caused the advancing to be lost even on
  clean restarts. This does not happen for logical slots, as any
  advancing marks the slot as dirty.

  Per discussion, the original feature was implemented so that, in the
  event of a crash, the slot may move backwards to a past LSN. This
  property is kept, and more documentation is added about that.

  This commit adds some new TAP tests to check the persistency of
  physical and logical slots after advancing across clean restarts.

  Author: Alexey Kondratov, Michael Paquier
  Reviewed-by: Andres Freund, Kyotaro Horiguchi, Craig Ringer
  Discussion: https://postgr.es/m/059cc53a-8b14-653a-a24d-5f867503b0ee@postgrespro.ru
  Backpatch-through: 11
* Invent "trusted" extensions, and remove the pg_pltemplate catalog. (Tom Lane, 2020-01-29)

  This patch creates a new extension property, "trusted". An extension
  that's marked that way in its control file can be installed by a
  non-superuser who has the CREATE privilege on the current database,
  even if the extension contains objects that normally would have to be
  created by a superuser. The objects within the extension will (by
  default) be owned by the bootstrap superuser, but the extension itself
  will be owned by the calling user. This allows replicating the old
  behavior around trusted procedural languages, without all the
  special-case logic in CREATE LANGUAGE. We have, however, chosen to
  loosen the rules slightly: formerly, only a database owner could take
  advantage of the special case that allowed installation of a trusted
  language, but now anyone who has CREATE privilege can do so.

  Having done that, we can delete the pg_pltemplate catalog, moving the
  knowledge it contained into the extension script files for the various
  PLs. This ends up being no change at all for the in-core PLs, but it
  is a large step forward for external PLs: they can now have the same
  ease of installation as core PLs do. The old "trusted PL" behavior was
  only available to PLs that had entries in pg_pltemplate, but now any
  extension can be marked trusted if appropriate.

  This also removes one of the stumbling blocks for our Python 2 -> 3
  migration, since the association of "plpythonu" with Python 2 is no
  longer hard-wired into pg_pltemplate's initial contents. Exactly where
  we go from here on that front remains to be settled, but one problem
  is fixed.

  Patch by me, reviewed by Peter Eisentraut, Stephen Frost, and others.

  Discussion: https://postgr.es/m/5889.1566415762@sss.pgh.pa.us
* Move jsonapi.c and jsonapi.h to src/common. (Robert Haas, 2020-01-29)

  To make this work:

  (1) makeJsonLexContextCstringLen now takes the encoding to be used as
      an argument;
  (2) check_stack_depth() is made to do nothing in frontend code; and
  (3) elog(ERROR, ...) is changed to pg_log_fatal + exit in frontend
      code.

  Mark Dilger, reviewed and slightly revised by me.

  Discussion: http://postgr.es/m/CA+TgmoYfOXhd27MUDGioVh6QtpD0C1K-f6ObSA10AWiHBAL5bA@mail.gmail.com
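A sketch of how one source file can serve both environments, assuming the usual FRONTEND define that distinguishes frontend from backend compilation. The macro shapes below are an assumption for illustration, not a quote of jsonapi.c:

    /* Hypothetical frontend/backend compatibility shims. */
    #ifdef FRONTEND
    #include "common/logging.h"
    #include <stdlib.h>

    /* (2) no stack-depth checking in frontend code */
    #define check_stack_depth()

    /* (3) elog(ERROR, ...) becomes log-and-exit in frontend code */
    #define json_error(...) \
        do { pg_log_fatal(__VA_ARGS__); exit(1); } while (0)
    #else
    #include "postgres.h"
    #include "miscadmin.h"      /* real check_stack_depth() */

    #define json_error(...) elog(ERROR, __VA_ARGS__)
    #endif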
* Fail if recovery target is not reached (Peter Eisentraut, 2020-01-29)

  Before, if a recovery target was configured but the archive ended
  before the target was reached, recovery would end and the server would
  promote without further notice. That was deemed to be pretty wrong.
  With this change, if the recovery target is not reached, it is a fatal
  error.

  Based-on-patch-by: Leif Gunnar Erlandsen <leif@lako.no>
  Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/993736dd3f1713ec1f63fc3b653839f5@lako.no
* Fix dangling pointer in EvalPlanQual machinery. (Tom Lane, 2020-01-28)

  EvalPlanQualStart() supposed that it could re-use the relsubs_rowmark
  and relsubs_done arrays from a prior instantiation. But since they are
  allocated in the es_query_cxt of the recheckestate, that's just wrong;
  EvalPlanQualEnd() will blow away that storage. Therefore we were using
  storage that could have been reallocated to something else, causing
  all sorts of havoc.

  I think this was modeled on the old code's handling of es_epqTupleSlot,
  but since the code was anyway clearing the arrays at re-use, there's
  clearly no expectation of importing any outside state. So it's just a
  dubious savings of a couple of pallocs, which is negligible compared
  to setting up a new planstate tree. Therefore, just allocate the
  arrays always. (I moved the allocations slightly for readability.)

  In principle this bug could cause a problem whenever EPQ rechecks are
  needed in more than one target table of a ModifyTable plan node. In
  practice it seems not quite so easy to trigger as that; I couldn't
  readily duplicate a crash with a partitioned target table, for
  instance. That's probably down to incidental choices about when to
  free or reallocate stuff. The added isolation test case does seem to
  reliably show an assertion failure, though.

  Per report from Oleksii Kliukin. Back-patch to v12 where the bug was
  introduced (evidently by commit 3fb307bc4).

  Discussion: https://postgr.es/m/EEF05F66-2871-4786-992B-5F45C92FEE2E@hintbits.com
* Fix randAccess setting in ReadRecord() (Heikki Linnakangas, 2020-01-28)

  Commit 38a957316d got this backwards.

  Author: Kyotaro Horiguchi
  Discussion: https://www.postgresql.org/message-id/20200128.194408.2260703306774646445.horikyota.ntt@gmail.com
* Fix compile error on HP C. (Thomas Munro, 2020-01-28)

  Per build farm animal anole, after commit 6f38d4dac3.
* Don't reset latch in ConditionVariablePrepareToSleep(). (Thomas Munro, 2020-01-28)

  It's not OK to do that without calling CHECK_FOR_INTERRUPTS(). Let the
  next wait loop deal with it, following the usual pattern.

  One consequence of this bug was that a SIGTERM delivered in a very
  narrow timing window could leave a parallel worker process waiting
  forever for a condition variable that will never be signaled, after an
  error was raised in another process.

  The code is a bit different in the stable branches due to commit
  1321509f, making problems less likely there. No back-patch for now,
  but we may end up deciding to make a similar change after more
  discussion.

  Author: Thomas Munro
  Reviewed-by: Shawn Debnath
  Reported-by: Tomas Vondra
  Discussion: https://postgr.es/m/CA%2BhUKGJOm8zZHjVA8svoNT3tHY0XdqmaC_kHitmgXDQM49m1dA%40mail.gmail.com
* Added relation name in error messages for constraint checks. (Amit Kapila, 2020-01-28)

  This gives the user more information about the error, and it makes
  such messages consistent with other similar messages in the code.

  Reported-by: Simon Riggs
  Author: Mahendra Singh and Simon Riggs
  Reviewed-by: Beena Emerson and Amit Kapila
  Discussion: https://postgr.es/m/CANP8+j+7YUvQvGxTrCiw77R23enMJ7DFmyA3buR+fa2pKs4XhA@mail.gmail.com
* Add connection parameters to control SSL protocol min/max in libpq (Michael Paquier, 2020-01-28)

  These two new parameters, named sslminprotocolversion and
  sslmaxprotocolversion, control respectively the minimum and the
  maximum version of the SSL protocol used for the SSL connection
  attempt. The default setting is to allow any version for both the
  minimum and the maximum bounds, causing libpq to rely on the bounds
  set by the backend when negotiating the protocol to use for an SSL
  connection. The bounds are checked when the values are set, at the
  earliest stage possible, as this makes the checks independent of any
  SSL implementation.

  Author: Daniel Gustafsson
  Reviewed-by: Michael Paquier, Cary Huang
  Discussion: https://postgr.es/m/4F246AE3-A7AE-471E-BD3D-C799D3748E03@yesql.se
* Remove dependency on HeapTuple from predicate locking functions. (Thomas Munro, 2020-01-28)

  The following changes make the predicate locking functions more
  generic and suitable for use by future access methods:

  - PredicateLockTuple() is renamed to PredicateLockTID(). It takes
    ItemPointer and inserting transaction ID instead of HeapTuple.
  - CheckForSerializableConflictIn() takes blocknum instead of buffer.
  - CheckForSerializableConflictOut() no longer takes HeapTuple or
    buffer.

  Author: Ashwin Agrawal
  Reviewed-by: Andres Freund, Kuntal Ghosh, Thomas Munro
  Discussion: https://postgr.es/m/CALfoeiv0k3hkEb3Oqk%3DziWqtyk2Jys1UOK5hwRBNeANT_yX%2Bng%40mail.gmail.com
* Apply project best practices to switches over enum values. (Tom Lane, 2020-01-27)

  In the wake of 1f3a02173, assorted buildfarm members were warning
  about "control reaches end of non-void function" or the like. Do what
  we've done elsewhere: in place of a "default" switch case that will
  prevent the compiler from warning about unhandled enum values, put a
  catchall elog() after the switch. And return a dummy value to satisfy
  compilers that don't know elog() doesn't return.
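The pattern in miniature (illustrative enum and function; elog() as in backend code): no "default:" label, so the compiler still warns when a newly added enum value goes unhandled, plus a catchall after the switch for corrupt values at runtime and a dummy return for compilers that don't know elog(ERROR) doesn't return.

    #include "postgres.h"

    typedef enum WidgetKind
    {
        WIDGET_ROUND,
        WIDGET_SQUARE
    } WidgetKind;

    static int
    widget_sides(WidgetKind kind)
    {
        switch (kind)
        {
            case WIDGET_ROUND:
                return 0;
            case WIDGET_SQUARE:
                return 4;
                /* no default: compiler warns if a case is missing */
        }

        elog(ERROR, "unrecognized widget kind: %d", (int) kind);
        return -1;              /* keep compilers quiet; not reached */
    }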
* Move some code from jsonapi.c to jsonfuncs.c. (Robert Haas, 2020-01-27)

  Specifically, move those functions that depend on ereport() from
  jsonapi.c to jsonfuncs.c, in preparation for allowing jsonapi.c to be
  used from frontend code.

  A few cases where elog(ERROR, ...) is used for can't-happen conditions
  are left alone; we can handle those in some other way in frontend
  code.

  Reviewed by Mark Dilger and Andrew Dunstan.

  Discussion: http://postgr.es/m/CA+TgmoYfOXhd27MUDGioVh6QtpD0C1K-f6ObSA10AWiHBAL5bA@mail.gmail.com
* Adjust pg_parse_json() so that it does not directly ereport(). (Robert Haas, 2020-01-27)

  Instead, it now returns a value indicating either success or the type
  of error which occurred. The old behavior is still available by
  calling pg_parse_json_or_ereport(). If the new interface is used, an
  error can be thrown by passing the return value of pg_parse_json() to
  json_ereport_error().

  pg_parse_json() can still elog() in can't-happen cases, but it seems
  like that issue is best handled separately.

  Adjust json_lex() and json_count_array_elements() to return an error
  code, too.

  This is all in preparation for making the backend's json parser
  available to frontend code.

  Reviewed and/or tested by Mark Dilger and Andrew Dunstan.

  Discussion: http://postgr.es/m/CA+TgmoYfOXhd27MUDGioVh6QtpD0C1K-f6ObSA10AWiHBAL5bA@mail.gmail.com
* Avoid unnecessary shm writes in Parallel Hash Join. (Thomas Munro, 2020-01-27)

  Currently, Parallel Hash Join cannot be used for full/right joins, so
  there is no point in setting the match flag. It turns out that the
  cache coherence traffic generated by those writes slows down large
  systems running many-core joins, so let's stop doing that. In future,
  if we need to use match bits in parallel joins, we might want to
  consider setting them only if not already set.

  Back-patch to 11, where Parallel Hash Join arrived.

  Reported-by: Deng, Gang
  Discussion: https://postgr.es/m/0F44E799048C4849BAE4B91012DB910462E9897A%40SHSMSX103.ccr.corp.intel.com
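The "set only if not already set" idea mentioned at the end looks like this in miniature (a sketch, not the committed hash join code):

    #include <stdbool.h>

    /* Test before writing, so already-set flags don't generate
     * cache-coherence traffic from many workers. */
    static inline void
    set_match_flag(volatile bool *matched)
    {
        /*
         * An unconditional "*matched = true" dirties the cache line on
         * every call; checking first makes the common already-set case a
         * read-only (shared) access.
         */
        if (!*matched)
            *matched = true;
    }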
* Fix some memory leaks and improve restricted token handling on Windows (Michael Paquier, 2020-01-27)

  The leaks have been detected by a Coverity run on Windows. No
  backpatch is done as the leaks are minor.

  While on it, make restricted token creation more consistent in its
  error handling by logging an error instead of a warning if
  advapi32.dll is missing, as it was in the NT4 days. Any modern
  platform should have this DLL around. Now, if the library is not
  there, an error is still reported back to the caller, and nothing
  else is done, so there is no behavior change in this commit.

  Author: Ranier Vilela
  Discussion: https://postgr.es/m/CAEudQApa9MG0foPkgPX87fipk=vhnF2Xfg+CfUyR08h4R7Mywg@mail.gmail.com
* Fix EXPLAIN (SETTINGS) to follow policy about when to print empty fields. (Tom Lane, 2020-01-26)

  In non-TEXT output formats, the "Settings" field should appear when
  requested, even if it would be empty.

  Also, get rid of the premature optimization of counting all the
  GUC_EXPLAIN variables at startup. Since there was no provision for
  adjusting that count later, all it'd take would be some extension
  marking a parameter as GUC_EXPLAIN to risk an assertion failure or
  memory stomp. We could make get_explain_guc_options() count those
  variables on-the-fly, or dynamically resize its array ... but TBH I
  do not think that making a transient array of pointers a bit smaller
  is worth any extra complication, especially when you consider all the
  other transient space EXPLAIN eats. So just allocate that array at
  the max possible size.

  In HEAD, also add some regression test coverage for this feature.

  Because of the memory-stomp hazard, back-patch to v12 where this
  feature was added.

  Discussion: https://postgr.es/m/19416.1580069629@sss.pgh.pa.us
* Refactor confusing code in _mdfd_openseg(). (Thomas Munro, 2020-01-27)

  As reported independently by a couple of people, _mdfd_openseg() is
  coded in a way that seems to imply that the segments could be opened
  in an order that isn't strictly sequential. Even if that were true,
  it's also using the wrong comparison. It's not an active bug, since
  the condition is always true anyway, but it's confusing, so replace it
  with an assertion.

  Author: Thomas Munro
  Reviewed-by: Andres Freund, Kyotaro Horiguchi, Noah Misch
  Discussion: https://postgr.es/m/CA%2BhUKG%2BNBw%2BuSzxF1os-SO6gUuw%3DcqO5DAybk6KnHKzgGvxhxA%40mail.gmail.com
  Discussion: https://postgr.es/m/20191222091930.GA1280238%40rfd.leadboat.com
* Refactor XLogReadRecord(), adding XLogBeginRead() function. (Heikki Linnakangas, 2020-01-26)

  The signature of XLogReadRecord() required the caller to pass the
  starting WAL position as argument, or InvalidXLogRecPtr to continue
  reading at the end of previous record. That's slightly awkward to the
  callers, as most of them don't want to randomly jump around in the
  WAL stream, but start reading at one position and then read
  everything from that point onwards. Remove the 'RecPtr' argument and
  add a new function XLogBeginRead() to specify the starting position
  instead. That's more convenient for the callers. Also, xlogreader
  holds state that is reset when you change the starting position, so
  having a separate function for doing that feels like a more natural
  fit.

  This changes XLogFindNextRecord() function so that it doesn't reset
  the xlogreader's state to what it was before the call anymore.
  Instead, it positions the xlogreader to the found record, like
  XLogBeginRead().

  Reviewed-by: Kyotaro Horiguchi, Alvaro Herrera
  Discussion: https://www.postgresql.org/message-id/5382a7a3-debe-be31-c860-cb810c08f366%40iki.fi
* Clean up EXPLAIN's handling of per-worker details. (Tom Lane, 2020-01-25)

  Previously, it was possible for EXPLAIN ANALYZE of a parallel query to
  produce several different "Workers" fields for a single plan node,
  because different portions of explain.c independently generated
  per-worker data and wrapped that output in separate fields. This is
  pretty bogus, especially for the structured output formats: even if
  it's not technically illegal, most programs would have a hard time
  dealing with such data.

  To improve matters, add infrastructure that allows redirecting
  per-worker values into a side data structure, and then collect that
  data into a single "Workers" field after we've finished running all
  the relevant code for a given plan node.

  There are a few visible side-effects:

  * In text format, instead of something like

      Sort Method: external merge  Disk: 4920kB
      Worker 0:  Sort Method: external merge  Disk: 5880kB
      Worker 1:  Sort Method: external merge  Disk: 5920kB
      Buffers: shared hit=682 read=10188, temp read=1415 written=2101
      Worker 0: actual time=130.058..130.324 rows=1324 loops=1
        Buffers: shared hit=337 read=3489, temp read=505 written=739
      Worker 1: actual time=130.273..130.512 rows=1297 loops=1
        Buffers: shared hit=345 read=3507, temp read=505 written=744

    you get

      Sort Method: external merge  Disk: 4920kB
      Buffers: shared hit=682 read=10188, temp read=1415 written=2101
      Worker 0: actual time=130.058..130.324 rows=1324 loops=1
        Sort Method: external merge  Disk: 5880kB
        Buffers: shared hit=337 read=3489, temp read=505 written=739
      Worker 1: actual time=130.273..130.512 rows=1297 loops=1
        Sort Method: external merge  Disk: 5920kB
        Buffers: shared hit=345 read=3507, temp read=505 written=744

  * When JIT is enabled, any relevant per-worker JIT stats are attached
    to the child node of the Gather or Gather Merge node, which is where
    the other per-worker output has always been. Previously, that info
    was attached directly to a Gather node, or missed entirely for
    Gather Merge.

  * A query's summary JIT data no longer includes a bogus
    "Worker Number: -1" field.

  A notable code-level change is that indenting for lines of text-format
  output should now be handled by calling "ExplainIndentText(es)",
  instead of hard-wiring how much space to emit. This seems a good deal
  cleaner anyway.

  This patch also adds a new "explain.sql" regression test script that's
  dedicated to testing EXPLAIN. There is more that can be done in that
  line, certainly, but for now it just adds some coverage of the XML and
  YAML output formats, which had been completely untested.

  Although this is surely a bug fix, it's not clear that people would be
  happy with rearranging EXPLAIN output in a minor release, so apply to
  HEAD only.

  Maciek Sakrejda and Tom Lane, based on an idea of Andres Freund's;
  reviewed by Georgios Kokolatos

  Discussion: https://postgr.es/m/CAOtHd0AvAA8CLB9Xz0wnxu1U=zJCKrr1r4QwwXi_kcQsHDVU=Q@mail.gmail.com
* Add functions gcd() and lcm() for integer and numeric types. (Dean Rasheed, 2020-01-25)

  These compute the greatest common divisor and least common multiple of
  a pair of numbers using the Euclidean algorithm.

  Vik Fearing, reviewed by Fabien Coelho.

  Discussion: https://postgr.es/m/adbd3e0b-e3f1-5bbc-21db-03caf1cef0f7@2ndquadrant.com
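The underlying algorithm, sketched for plain non-negative C integers; the committed SQL functions also cover numeric and handle negative inputs and overflow reporting, which this illustration glosses over:

    /* Euclid: replace the pair (a, b) by (b, a mod b) until b is zero;
     * the survivor is the greatest common divisor. */
    static long
    gcd_long(long a, long b)
    {
        while (b != 0)
        {
            long    t = a % b;

            a = b;
            b = t;
        }
        return a;
    }

    static long
    lcm_long(long a, long b)
    {
        if (a == 0 || b == 0)
            return 0;           /* conventional result; also avoids /0 */

        /* divide before multiplying to reduce the risk of overflow */
        return (a / gcd_long(a, b)) * b;
    }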
* Remove jsonapi.c's lex_accept(). (Robert Haas, 2020-01-24)

  At first glance, this function seems useful, but it actually increases
  the amount of code required rather than decreasing it. Inline the
  logic into the callers instead; most callers don't use the 'lexeme'
  argument for anything and as a result considerable simplification is
  possible.

  Along the way, fix the header comment for the nearby function
  lex_expect(), which mislabeled it as lex_accept().

  Patch by me, reviewed by David Steele, Mark Dilger, and Andrew
  Dunstan.

  Discussion: http://postgr.es/m/CA+TgmoYfOXhd27MUDGioVh6QtpD0C1K-f6ObSA10AWiHBAL5bA@mail.gmail.com
* Split JSON lexer/parser from 'json' data type support. (Robert Haas, 2020-01-24)

  Keep the code that pertains to the 'json' data type in json.c, but
  move the lexing and parsing code to a new file jsonapi.c, a name I
  chose because the corresponding prototypes are in jsonapi.h.

  This seems like a logical division, because the JSON lexer and parser
  are also used by the 'jsonb' data type, but the SQL-callable functions
  in json.c are a separate thing. Also, the new jsonapi.c file needs to
  include far fewer header files than json.c, which seems like a good
  sign that this is an appropriate place to insert an abstraction
  boundary. I took the opportunity to remove a few apparently-unneeded
  includes from json.c at the same time.

  Patch by me, reviewed by David Steele, Mark Dilger, and Andrew
  Dunstan. The previous commit was, too, but I forgot to note it in the
  commit message.

  Discussion: http://postgr.es/m/CA+TgmoYfOXhd27MUDGioVh6QtpD0C1K-f6ObSA10AWiHBAL5bA@mail.gmail.com
* Adjust src/include/utils/jsonapi.h so it's not backend-only. (Robert Haas, 2020-01-24)

  The major change here is that we no longer include jsonb.h into
  jsonapi.h. The reason that was necessary is that jsonapi.h included
  prototypes for several functions in jsonfuncs.c that depend on the
  Jsonb type. Move those prototypes to a new header, jsonfuncs.h, and
  include it where needed.

  The other change is that JsonEncodeDateTime is now declared in json.h
  rather than jsonapi.h.

  Taken together, these steps eliminate all dependencies of jsonapi.h on
  backend-only data types and header files, so that it can potentially
  be included in frontend code.
* Add pg_file_sync() to adminpack extension. (Fujii Masao, 2020-01-24)

  This function allows us to fsync the specified file or directory. It's
  useful, for example, when we want to sync the file that
  pg_file_write() writes out or that COPY TO exports the data into, for
  durability.

  Author: Fujii Masao
  Reviewed-by: Julien Rouhaud, Arthur Zakirov, Michael Paquier,
    Atsushi Torikoshi
  Discussion: https://www.postgresql.org/message-id/CAHGQGwGY8uzZ_k8dHRoW1zDcy1Z7=5GQ+So4ZkVy2u=nLsk=hA@mail.gmail.com
* Fix an oversight in commit 4c70098ff. (Tom Lane, 2020-01-23)

  I had supposed that the from_char_seq_search() call sites were all
  passing the constant arrays you'd expect them to pass ... but on
  looking closer, the one for DY format was passing the days[] array,
  not days_short[]. This accidentally worked because the day
  abbreviations in English are all the same as the first three letters
  of the full day names. However, once we took out the "maximum
  comparison length" logic, it stopped working.

  As penance for that oversight, add regression test cases covering
  this, as well as every other switch case in DCH_from_char() that was
  not reached according to the code coverage report.

  Also, fold the DCH_RM and DCH_rm cases into one --- now that
  seq_search is case independent, there's no need to pass different
  comparison arrays for those cases.

  Back-patch, as the previous commit was.
* Clean up formatting.c's logic for matching constant strings. (Tom Lane, 2020-01-23)

  seq_search(), which is used to match input substrings to constants
  such as month and day names, had a lot of bizarre and unnecessary
  behaviors. It was mostly possible to avert our eyes from that before,
  but we don't want to duplicate those behaviors in the upcoming patch
  to allow recognition of non-English month and day names. So it's time
  to clean this up. In particular:

  * seq_search scribbled on the input string, which is a pretty
    dangerous thing to do, especially in the badly underdocumented way
    it was done here. Fortunately the input string is a temporary copy,
    but that was being made three subroutine levels away, making it
    something easy to break accidentally. The behavior is externally
    visible nonetheless, in the form of odd case-folding in error
    reports about unrecognized month/day names. The scribbling is
    evidently being done to save a few calls to pg_tolower, but that's
    such a cheap function (at least for ASCII data) that it's pretty
    pointless to worry about. In HEAD I switched it to be
    pg_ascii_tolower to ensure it is cheap in all cases; but there are
    corner cases in Turkish where this'd change behavior, so leave it
    as pg_tolower in the back branches.

  * seq_search insisted on knowing the case form (all-upper, all-lower,
    or initcap) of the constant strings, so that it didn't have to
    case-fold them to perform case-insensitive comparisons. This
    likewise seems like excessive micro-optimization, given that
    pg_tolower is certainly very cheap for ASCII data. It seems unsafe
    to assume that we know the case form that will come out of
    pg_locale.c for localized month/day names, so it's better just to
    define the comparison rule as "downcase all strings before
    comparing". (The choice between downcasing and upcasing is
    arbitrary so far as English is concerned, but it might not be in
    other locales, so follow citext's lead here.)

  * seq_search also had a parameter that'd cause it to report a match
    after a maximum number of characters, even if the constant string
    were longer than that. This was not actually used because no caller
    passed a value small enough to cut off a comparison. Replicating
    that behavior for localized month/day names seems expensive as well
    as useless, so let's get rid of that too.

  * from_char_seq_search used the maximum-length parameter to truncate
    the input string in error reports about not finding a matching
    name. This leads to rather confusing reports in many cases. Worse,
    it is outright dangerous if the input string isn't all-ASCII,
    because we risk truncating the string in the middle of a multibyte
    character. That'd lead either to delivering an illegible error
    message to the client, or to encoding-conversion failures that
    obscure the actual data problem. Get rid of that in favor of
    truncating at whitespace if any (a suggestion due to Alvaro
    Herrera).

  In addition to fixing these things, I const-ified the input string
  pointers of DCH_from_char and its subroutines, to make sure there
  aren't any other scribbling-on-input problems.

  The risk of generating a badly-encoded error message seems like
  enough of a bug to justify back-patching, so patch all supported
  branches.

  Discussion: https://postgr.es/m/29432.1579731087@sss.pgh.pa.us
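The comparison rule the cleanup settles on ("downcase all strings before comparing") reduces to something like the following sketch. Names and the exact matching contract are illustrative, not the formatting.c code:

    #include <stddef.h>

    /* cheap ASCII-only case folding; non-ASCII bytes pass through */
    static char
    ascii_tolower(char ch)
    {
        return (ch >= 'A' && ch <= 'Z') ? (char) (ch - 'A' + 'a') : ch;
    }

    /* Return the index of the NULL-terminated array entry matching a
     * prefix of "name" (setting *match_len to the matched length), or -1
     * if no entry matches.  Neither input is modified. */
    static int
    seq_search_ci(const char *name, const char *const *array,
                  int *match_len)
    {
        for (int i = 0; array[i] != NULL; i++)
        {
            const char *n = name;
            const char *a = array[i];

            /* fold both sides on the fly instead of scribbling on input */
            while (*a != '\0' && ascii_tolower(*n) == ascii_tolower(*a))
            {
                n++;
                a++;
            }
            if (*a == '\0')     /* consumed the whole constant: a match */
            {
                *match_len = (int) (n - name);
                return i;
            }
        }
        return -1;
    }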