path: root/src/backend/utils/adt
Commit message (Author, Age)
...
* Remove still more useless assignments. (Tom Lane, 2020-09-04)
    Fix some more things scan-build pointed to as dead stores. In some of these cases, rearranging the code a little leads to more readable code IMO. It's all cosmetic, though.

    Discussion: https://postgr.es/m/CAEudQAo1+AcGppxDSg8k+zF4+Kv+eJyqzEDdbpDg58-=MQcerQ@mail.gmail.com
* remove redundant initializations (Bruce Momjian, 2020-09-03)
    Reported-by: Ranier Vilela
    Discussion: https://postgr.es/m/CAEudQAo1+AcGppxDSg8k+zF4+Kv+eJyqzEDdbpDg58-=MQcerQ@mail.gmail.com
    Author: Ranier Vilela
    Backpatch-through: master
* Add string_to_table() function. (Tom Lane, 2020-09-02)
    This splits a string at occurrences of a delimiter. It is exactly like string_to_array() except for producing a set of values instead of an array of values. Thus, the relationship of these two functions is the same as between regexp_split_to_table() and regexp_split_to_array().

    Although the same results could be had from unnest(string_to_array()), this is somewhat faster than that, and anyway it seems reasonable to have it for symmetry with the regexp functions.

    Pavel Stehule, reviewed by Peter Smith

    Discussion: https://postgr.es/m/CAFj8pRD8HOpjq2TqeTBhSo_QkzjLOhXzGCpKJ4nCs7Y9SQkuPw@mail.gmail.com
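
    For instance, on a server with this function (v14 or later), one would expect:

        SELECT string_to_table('a,b,c', ',');   -- three rows: a, b, c
        SELECT string_to_array('a,b,c', ',');   -- one row: {a,b,c}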
* Move codes for pg_backend_memory_contexts from mmgr/mcxt.c to adt/mcxtfuncs.c. (Fujii Masao, 2020-08-26)
    Previously the code for pg_backend_memory_contexts was in src/backend/utils/mmgr/mcxt.c. This commit moves it to src/backend/utils/adt/mcxtfuncs.c so that mcxt.c basically includes only the low-level interface for memory contexts.

    Author: Atsushi Torikoshi
    Reviewed-by: Michael Paquier, Fujii Masao
    Discussion: https://postgr.es/m/20200819135545.GC19121@paquier.xyz
* Prevent concurrent SimpleLruTruncate() for any given SLRU. (Noah Misch, 2020-08-15)
    The SimpleLruTruncate() header comment states the new coding rule. To achieve this, add locktype "frozenid" and two LWLocks. This closes a rare opportunity for data loss, which manifested as "apparent wraparound" or "could not access status of transaction" errors. Data loss is more likely in pg_multixact, due to released branches' thin margin between multiStopLimit and multiWrapLimit. If a user's physical replication primary logged ": apparent wraparound" messages, the user should rebuild standbys of that primary regardless of symptoms. At less risk is a cluster having emitted "not accepting commands" errors or "must be vacuumed" warnings at some point. One can test a cluster for this data loss by running VACUUM FREEZE in every database.

    Back-patch to 9.5 (all supported versions).

    Discussion: https://postgr.es/m/20190218073103.GA1434723@rfd.leadboat.com
* Remove obsolete HAVE_BUGGY_SOLARIS_STRTOD (Peter Eisentraut, 2020-08-15)
    Fixed more than 10 years ago.

    Reviewed-by: Noah Misch <noah@leadboat.com>
    Discussion: https://www.postgresql.org/message-id/flat/aa266ede-baaa-f4e6-06cf-5b1737610e9a%402ndquadrant.com
* snapshot scalability: Don't compute global horizons while building snapshots. (Andres Freund, 2020-08-12)
    To make GetSnapshotData() more scalable, it cannot look at each proc's xmin: while snapshot contents do not need to change whenever a read-only transaction commits or a snapshot is released, a proc's xmin is modified in those cases. The frequency of xmin modifications leads to, particularly on higher core count systems, many cache misses inside GetSnapshotData(), despite the data underlying a snapshot not changing. That is the most significant source of GetSnapshotData() scaling poorly on larger systems.

    Without accessing xmins, GetSnapshotData() cannot calculate accurate horizons / thresholds as it has so far. But we don't really have to: the horizons don't actually change that much between GetSnapshotData() calls. Nor are the horizons actually used every time a snapshot is built.

    The trick this commit introduces is to delay computation of accurate horizons until their use, and to use horizon boundaries to determine whether accurate horizons need to be computed.

    The use of RecentGlobal[Data]Xmin to decide whether a row version could be removed has been replaced with new GlobalVisTest* functions. These use two thresholds to determine whether a row can be pruned:
    1) definitely_needed, indicating that rows deleted by XIDs >= definitely_needed are definitely still visible.
    2) maybe_needed, indicating that rows deleted by XIDs < maybe_needed can definitely be removed.

    GetSnapshotData() updates definitely_needed to be the xmin of the computed snapshot. When testing whether a row can be removed (with GlobalVisTestIsRemovableXid()) and the tested XID falls in between the two (i.e. XID >= maybe_needed && XID < definitely_needed), the boundaries can be recomputed to be more accurate. As it is not cheap to compute accurate boundaries, we limit the number of times that happens in short succession. As the boundaries used by GlobalVisTestIsRemovableXid() are never reset (with maybe_needed updated by GetSnapshotData()), it is likely that further tests can benefit from an earlier computation of accurate horizons.

    To avoid regressing performance when old_snapshot_threshold is set (as that requires an accurate horizon to be computed), heap_page_prune_opt() doesn't unconditionally call TransactionIdLimitedForOldSnapshots() anymore. Both the computation of the limited horizon and the triggering of errors (with SetOldSnapshotThresholdTimestamp()) are now done only when necessary to remove tuples.

    This commit just removes the accesses to PGXACT->xmin from GetSnapshotData(), but other members of PGXACT residing in the same cache line are still accessed. Therefore this in itself does not result in a significant improvement. Subsequent commits will take advantage of the fact that GetSnapshotData() no longer needs to access xmins.

    Note: This contains a workaround in heap_page_prune_opt() to keep the snapshot_too_old tests working. While that workaround is ugly, the tests currently are not meaningful, and it seems best to address them separately.

    Author: Andres Freund <andres@anarazel.de>
    Reviewed-By: Robert Haas <robertmhaas@gmail.com>
    Reviewed-By: Thomas Munro <thomas.munro@gmail.com>
    Reviewed-By: David Rowley <dgrowleyml@gmail.com>
    Discussion: https://postgr.es/m/20200301083601.ews6hz5dduc3w2se@alap3.anarazel.de
* Replace remaining StrNCpy() by strlcpy() (Peter Eisentraut, 2020-08-10)
    They are equivalent, except that StrNCpy() zero-fills the entire destination buffer instead of providing just one trailing zero. For all but a tiny number of callers, that's just overhead rather than being desirable.

    Remove StrNCpy() as it is now unused.

    In some cases, namestrcpy() is the more appropriate function to use. While we're here, simplify the API of namestrcpy(): remove the return value, don't check for NULL input. Nothing was using that anyway. Also, remove a few unused name-related functions.

    Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
    Discussion: https://www.postgresql.org/message-id/flat/44f5e198-36f6-6cdb-7fa9-60e34784daae%402ndquadrant.com
* Fix recently-introduced performance problem in ts_headline(). (Tom Lane, 2020-07-31)
    The new hlCover() algorithm that I introduced in commit c9b0c678d turns out to potentially take O(N^2) or worse time on long documents, if there are many occurrences of individual query words but few or no substrings that actually satisfy the query. (One way to hit this behavior is with a "common_word & rare_word" type of query.) This seems unavoidable given the original goal of checking every substring of the document, so we have to back off that idea. Fortunately, it seems unlikely that anyone would really want headlines spanning all of a long document, so we can avoid the worse-than-linear behavior by imposing a maximum length of substring that we'll consider.

    For now, just hard-wire that maximum length as a multiple of max_words times max_fragments. Perhaps at some point somebody will argue for exposing it as a ts_headline parameter, but I'm hesitant to make such a feature addition in a back-patched bug fix.

    I also noted that the hlFirstIndex() function I'd added in that commit was unnecessarily stupid: it really only needs to check whether a HeadlineWordEntry's item pointer is null or not. This wouldn't make all that much difference in typical cases with queries having just a few terms, but a cycle shaved is a cycle earned.

    In addition, add a CHECK_FOR_INTERRUPTS call in TS_execute_recurse. This ensures that hlCover's loop is cancellable if it manages to take a long time, and it may protect some other TS_execute callers as well.

    Back-patch to 9.6 as the previous commit was. I also chose to add the CHECK_FOR_INTERRUPTS call to 9.5. The old hlCover() algorithm seems to avoid the O(N^2) behavior, at least on the test case I tried, but nonetheless it's not very quick on a long document.

    Per report from Stephen Frost.

    Discussion: https://postgr.es/m/20200724160535.GW12375@tamriel.snowman.net
* Add hash_mem_multiplier GUC. (Peter Geoghegan, 2020-07-29)
    Add a GUC that acts as a multiplier on work_mem. It gets applied when sizing executor node hash tables that were previously size constrained using work_mem alone.

    The new GUC can be used to preferentially give hash-based nodes more memory than the generic work_mem limit. It is intended to enable admin tuning of the executor's memory usage. Overall system throughput and system responsiveness can be improved by giving hash-based executor nodes more memory (especially over sort-based alternatives, which are often much less sensitive to being memory constrained).

    The default value for hash_mem_multiplier is 1.0, which is also the minimum valid value. This means that hash-based nodes continue to apply work_mem in the traditional way by default.

    hash_mem_multiplier is generally useful. However, it is being added now due to concerns about hash aggregate performance stability for users that upgrade to Postgres 13 (which added disk-based hash aggregation in commit 1f39bce0). While the old hash aggregate behavior risked out-of-memory errors, it is nevertheless likely that many users actually benefited. Hash agg's previous indifference to work_mem during query execution was not just faster; it also accidentally made aggregation resilient to grouping estimate problems (at least in cases where this didn't create destabilizing memory pressure).

    hash_mem_multiplier can provide a certain kind of continuity with the behavior of Postgres 12 hash aggregates in cases where the planner incorrectly estimates that all groups (plus related allocations) will fit in work_mem/hash_mem. This seems necessary because hash-based aggregation is usually much slower when only a small fraction of all groups can fit. Even when it isn't possible to totally avoid hash aggregates that spill, giving hash aggregation more memory will reliably improve performance (the same cannot be said for external sort operations, which appear to be almost unaffected by memory availability provided it's at least possible to get a single merge pass).

    The PostgreSQL 13 release notes should advise users that increasing hash_mem_multiplier can help with performance regressions associated with hash aggregation. That can be taken care of by a later commit.

    Author: Peter Geoghegan
    Reviewed-By: Álvaro Herrera, Jeff Davis
    Discussion: https://postgr.es/m/20200625203629.7m6yvut7eqblgmfo@alap3.anarazel.de
    Discussion: https://postgr.es/m/CAH2-WzmD%2Bi1pG6rc1%2BCjc4V6EaFJ_qSuKCCHVnH%3DoruqD-zqow%40mail.gmail.com
    Backpatch: 13-, where disk-based hash aggregation was introduced.
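
    For instance, to let hash-based nodes use up to twice work_mem, one would expect something like:

        SET work_mem = '64MB';
        SET hash_mem_multiplier = 2.0;   -- hash tables may grow to roughly 128MB before spilling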
* Fix incorrect print format in json.c (Michael Paquier, 2020-07-29)
    Oid is unsigned, so %u needs to be used and not %d. The code path involved here is not normally reachable, so no backpatch is done.

    Author: Justin Pryzby
    Discussion: https://postgr.es/m/20200728015523.GA27308@telsasoft.com
* Tweak behavior of pg_stat_activity.leader_pid (Michael Paquier, 2020-07-26)
    The initial implementation of leader_pid in pg_stat_activity added by b025f32 took the approach of strictly printing what a PGPROC entry includes. In short, if a backend has been involved in parallel query at least once, leader_pid would remain set as long as the backend is alive. For a parallel group leader, this means that the field would always be set after it participated at least once in parallel query, and after more discussion this could be confusing if using, for example, a connection pooler.

    This commit changes the data printed so that leader_pid is always NULL for a parallel group leader, showing a non-NULL value only for the parallel workers, and only while a parallel query is running, as workers are shut down once the query has completed.

    This does not change the definition of any catalog, so no catalog bump is needed.

    Per discussion with Justin Pryzby, Álvaro Herrera, Julien Rouhaud and me.

    Discussion: https://postgr.es/m/20200721035145.GB17300@paquier.xyz
    Backpatch-through: 13
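
    With this change, a query such as the following would be expected to show leader_pid only on worker rows, and only while the parallel query is in flight:

        SELECT pid, leader_pid, backend_type
        FROM pg_stat_activity
        WHERE backend_type = 'parallel worker';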
* Replace TS_execute's TS_EXEC_CALC_NOT flag with TS_EXEC_SKIP_NOT. (Tom Lane, 2020-07-24)
    It's fairly silly that ignoring NOT subexpressions is TS_execute's default behavior. It's wrong on its face and it encourages errors of omission. Moreover, the only two remaining callers that aren't specifying CALC_NOT are in ts_headline calculations, and it's very arguable that those are bugs: if you've specified "!foo" in your query, why would you want to get a headline that includes "foo"?

    Hence, rip that out and change the default behavior to be to calculate NOT accurately. As a concession to the slim chance that there is still somebody somewhere who needs the incorrect behavior, provide a new SKIP_NOT flag to explicitly request that.

    Back-patch into v13, mainly because it seems better to change this at the same time as the previous commit's rejiggering of TS_execute related APIs. Any outside callers affected by this change are probably also affected by that one.

    Discussion: https://postgr.es/m/CALT9ZEE-aLotzBg-pOp2GFTesGWVYzXA3=mZKzRDa_OKnLF7Mg@mail.gmail.com
* Fix assorted bugs by changing TS_execute's callback API to ternary logic. (Tom Lane, 2020-07-24)
    Text search sometimes failed to find valid matches, for instance '!crew:A'::tsquery might fail to locate 'crew:1B'::tsvector during an index search. The root of the issue is that TS_execute's callback functions were not changed to use ternary (yes/no/maybe) reporting when we made the search logic itself do so. It's somewhat annoying to break that API, but on the other hand we now see that any code using plain boolean logic is almost certainly broken since the addition of phrase search. There seem to be very few outside callers of this code anyway, so we'll just break them intentionally to get them to adapt.

    This allows removal of tsginidx.c's private re-implementation of TS_execute, since that's now entirely duplicative. It's also no longer necessary to avoid use of CALC_NOT in tsgistidx.c, since the underlying callbacks can now do something reasonable.

    Back-patch into v13. We can't change this in stable branches, but it seems not quite too late to fix it in v13.

    Tom Lane and Pavel Borisov

    Discussion: https://postgr.es/m/CALT9ZEE-aLotzBg-pOp2GFTesGWVYzXA3=mZKzRDa_OKnLF7Mg@mail.gmail.com
* Support infinity and -infinity in the numeric data type. (Tom Lane, 2020-07-22)
    Add infinities that behave the same as they do in the floating-point data types. Aside from any intrinsic usefulness these may have, this closes an important gap in our ability to convert floating values to numeric and/or replace float-based APIs with numeric.

    The new values are represented by bit patterns that were formerly not used (although old code probably would take them for NaNs). So there shouldn't be any pg_upgrade hazard.

    Patch by me, reviewed by Dean Rasheed and Andrew Gierth

    Discussion: https://postgr.es/m/606717.1591924582@sss.pgh.pa.us
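
    For instance, on v14 or later one would expect:

        SELECT 'inf'::numeric, '-infinity'::numeric, 'inf'::numeric + 1;
        -- Infinity | -Infinity | Infinity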
* neqjoinsel must now pass through collation to eqjoinsel. (Tom Lane, 2020-07-21)
    Since commit 044c99bc5, eqjoinsel passes the passed-in collation to any operators it invokes. However, neqjoinsel failed to pass on whatever collation it got, so that if we invoked a collation-dependent operator via that code path, we'd get "could not determine which collation to use for string comparison" or the like.

    Per report from Justin Pryzby. Back-patch to v12, like the previous commit.

    Discussion: https://postgr.es/m/20200721191606.GL5748@telsasoft.com
* Weaken type-OID-matching checks in array_recv and record_recv. (Tom Lane, 2020-07-21)
    Rather than always insisting on an exact match of the type OID in the data to the element type or column type we expect, complain only when both OIDs fall within the manually-assigned range. This acknowledges the reality that user-defined types don't have stable OIDs, while still preserving some of the mistake-detection value of the old test.

    (It's not entirely clear whether to error if one OID is manually assigned and the other isn't. But perhaps that case could arise in cross-version cases where a former extension type has been imported into core, so I let it pass.)

    This change allows us to remove the prohibition on binary transfer of user-defined arrays and composites in the recently-landed support for binary logical replication (commit 9de77b545). We can just unconditionally drop that check, since if the client has asked for binary transfer it must be >= v14 and must have this change.

    Discussion: https://postgr.es/m/CADK3HH+R3xMn=8t3Ct+uD+qJ1KD=Hbif5NFMJ+d5DkoCzp6Vgw@mail.gmail.com
* Fix some corner cases for window ranges with infinite offsets. (Tom Lane, 2020-07-20)
    Many situations where the offset is infinity were not handled sanely. We should generally allow the val versus base +/- offset comparison to proceed according to the normal rules of IEEE arithmetic; however, we must do something special for the corner cases where base +/- offset would produce NaN due to subtracting two like-signed infinities. That corresponds to asking which values infinitely precede +inf or infinitely follow -inf, which should certainly be true of any finite value or of the opposite-signed infinity. After some discussion it seems that the best decision is to make it true of the same-signed infinity as well, ie, just return constant TRUE if the calculation would produce a NaN.

    (We could write this with a bit less code by subtracting anyway, and then checking for a NaN result. However, I prefer this formulation because it'll be easier to transpose into numeric.c.)

    Although this seems like clearly a bug fix with respect to finite values, it is less obviously correct for infinite values. Between that and the fact that the whole issue only arises for very strange window specifications (e.g. RANGE BETWEEN 'inf' PRECEDING AND 'inf' PRECEDING), I'll desist from back-patching.

    Noted by Dean Rasheed.

    Discussion: https://postgr.es/m/3393130.1594925893@sss.pgh.pa.us
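
    A frame with an infinite offset now behaves predictably; for a float8 ordering column one would expect the frame below to cover every row from the partition start through the current row:

        SELECT x,
               sum(x) OVER (ORDER BY x
                            RANGE BETWEEN 'infinity' PRECEDING AND CURRENT ROW)
        FROM (VALUES (1.0::float8), (2.0), (3.0)) AS v(x);
        -- running sums 1, 3, 6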
* Fix whitespace (Peter Eisentraut, 2020-07-17)
* Eliminate cache lookup errors in SQL functions for object addresses (Michael Paquier, 2020-07-15)
    When using the following functions, users could see various types of errors of the type "cache lookup failed for OID XXX" with elog(), which can only be used for internal errors:
    * pg_describe_object()
    * pg_identify_object()
    * pg_identify_object_as_address()

    The set of APIs managing object addresses for all object types is made smarter by gaining a new argument "missing_ok" that allows any caller to control whether an error is raised on an undefined object. The SQL functions listed above are changed to handle the case where an object is missing.

    Regression tests are added for all object types for the cases where these are undefined. Before this commit, these cases failed with cache lookup errors, and now they basically return NULL (minus the name of the object type requested).

    Author: Michael Paquier
    Reviewed-by: Aleksander Alekseev, Dmitry Dolgov, Daniel Gustafsson, Álvaro Herrera, Kyotaro Horiguchi
    Discussion: https://postgr.es/m/CAB7nPqSZxrSmdHK-rny7z8mi=EAFXJ5J-0RbzDw6aus=wB5azQ@mail.gmail.com
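
    For instance, probing an OID that does not exist should now yield a row of NULLs rather than an elog() failure (sketch; the exact output depends on the object class):

        SELECT * FROM pg_identify_object('pg_class'::regclass, 0, 0);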
* Forbid numeric NaN in jsonpath (Alexander Korotkov, 2020-07-11)
    The SQL standard doesn't define numeric Inf or NaN values. It appears even more ridiculous to support them in jsonpath, given that JSON doesn't support these values either. This commit forbids returning NaN from .double(), which was previously allowed. NaN can't be the result of an inner jsonpath computation over non-NaNs, so we cannot expect NaN in the jsonpath output.

    Reported-by: Tom Lane
    Discussion: https://postgr.es/m/203949.1591879542%40sss.pgh.pa.us
    Author: Alexander Korotkov
    Reviewed-by: Tom Lane
    Backpatch-through: 12
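
    For instance, a conversion that used to yield a NaN double should now fail (the exact error text may differ across versions):

        SELECT jsonb_path_query('"NaN"', '$.double()');
        -- ERROR: the string is not a valid representation of a double precision number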
* Improve error reporting for jsonpath .double() method (Alexander Korotkov, 2020-07-11)
    When the jsonpath .double() method detects that a numeric or string can't be converted to double precision, it throws an error. This commit makes these errors explicitly express the reason for the failure.

    Discussion: https://postgr.es/m/CAPpHfdtqJtiSXkP7tOXez18NxhLUH_-75bL8%3DOce4Ki%2Bbv7V6Q%40mail.gmail.com
    Author: Alexander Korotkov
    Reviewed-by: Tom Lane
    Backpatch-through: 12
* Fix pg_current_logfile() to not emit a carriage return on Windows. (Tom Lane, 2020-07-09)
    Due to not having our signals straight about CRLF vs. LF line termination, the output of pg_current_logfile() included a trailing \r on Windows. To fix, force the file descriptor it uses into text mode.

    While here, move a couple of local variable declarations to make the function's logic clearer.

    In v12 and v13, also back-patch the test added by 1c4e88e2f so that this function has some test coverage. However, the 004_logrotate.pl test script doesn't exist before v12, and it didn't seem worth adding to older branches just for this.

    Per report from Thomas Kellerer. Back-patch to v10 where this function was added.

    Discussion: https://postgr.es/m/412ae8da-76bb-640f-039a-f3513499e53d@gmx.net
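
    For reference, the function simply reports the current log file path, now without a stray \r on Windows:

        SELECT pg_current_logfile();
        -- NULL unless the logging collector is active; otherwise a path such as log/postgresql-Thu.log,
        -- depending on the log_filename setting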
* Don't create pg_type entries for sequences or toast tables. (Tom Lane, 2020-07-07)
    Commit f7f70d5e2 left one inconsistency behind: we're still creating pg_type entries for the composite types of sequences and toast tables, but not arrays over those composites. But there seems precious little reason to have named composite types for toast tables, and not much more to have them for sequences (especially given the thought that sequences may someday not be standalone relations at all).

    So, let's close that inconsistency by removing these composite types, rather than adding arrays for them. This buys back a little bit of the initial pg_type bloat added by the previous patch, and could be a significant savings in a large database with many toast tables.

    Aside from a small logic rearrangement in heap_create_with_catalog, this patch mostly needs to clean up some places that were assuming that pg_class.reltype always has a valid value. Those are really pre-existing bugs, given that it's documented otherwise; notably, the plpgsql changes fix code that gives "cache lookup failed for type 0" on indexes today. But none of these seem interesting enough to back-patch.

    Also, remove the pg_dump/pg_upgrade infrastructure for propagating a toast table's pg_type OID into the new database, since we no longer need that.

    Discussion: https://postgr.es/m/761F1389-C6A8-4C15-80CE-950C961F5341@gmail.com
* Refactor routines for name lookups of procedures and operators (Michael Paquier, 2020-07-06)
    This introduces a new set of extended routines for procedure and operator name lookups, with a flag bitmask argument that can modify the result. The following options are available:
    - Force schema qualification, ignoring search_path. This is similar to the existing option for format_{operator|procedure}_qualified().
    - Force NULL as result instead of a numeric OID for an undefined object. This option is new.

    This is a refactoring similar to 1185c78, that will be used for a future patch to improve the SQL functions providing information using object addresses for undefined objects.

    Author: Michael Paquier
    Reviewed-by: Aleksander Alekseev, Dmitry Dolgov, Daniel Gustafsson, Álvaro Herrera
    Discussion: https://postgr.es/m/CAB7nPqSZxrSmdHK-rny7z8mi=EAFXJ5J-0RbzDw6aus=wB5azQ@mail.gmail.com
* Add new flag to format_type_extended() to get NULL for undefined type (Michael Paquier, 2020-07-06)
    If a type scanned is undefined, type format routines have two behaviors depending on whether FORMAT_TYPE_ALLOW_INVALID is used by the caller or not:
    - Issue a cache lookup error
    - Return an undefined type name: "???", "???[]" or "-"

    The current interface is not really helpful for callers willing to format a type name properly, but still make sure that the type is defined, as there could be types matching the strings generated when looking for an undefined type, even if that should not be a problem in practice. In order to counter that, add a new flag called FORMAT_TYPE_INVALID_AS_NULL that returns a NULL result instead of "???" or "-", without generating an error. This flag will be used in a follow-up patch improving the set of SQL functions showing information for object addresses when it comes to undefined objects.

    Author: Michael Paquier
    Reviewed-by: Aleksander Alekseev, Dmitry Dolgov, Daniel Gustafsson, Álvaro Herrera
    Discussion: https://postgr.es/m/CAB7nPqSZxrSmdHK-rny7z8mi=EAFXJ5J-0RbzDw6aus=wB5azQ@mail.gmail.com
* Fix "ignoring return value" complaints from commit 96d1f423f9Joe Conway2020-07-04
| | | | | | | | | | | | | | | The cfbot and some BF animals are complaining about the previous read_binary_file commit because of ignoring return value of ‘fread’. So let's make everyone happy by testing the return value even though not strictly needed. Reported by Justin Pryzby, and suggested patch by Tom Lane. Backpatched to v11 same as the previous commit. Reported-By: Justin Pryzby Reviewed-By: Tom Lane Discussion: https://postgr.es/m/flat/969b8d82-5bb2-5fa8-4eb1-f0e685c5d736%40joeconway.com Backpatch-through: 11
* Read until EOF vice stat-reported size in read_binary_file (Joe Conway, 2020-07-04)
    read_binary_file(), used by SQL functions pg_read_file() and friends, uses stat to determine the file length to read when not passed an explicit length as an argument. This is problematic, for example, if the file being read is a virtual file with a stat-reported length of zero. Arrange to read until EOF, or the StringInfo data string length limit, is reached instead.

    Original complaint and patch by me, with significant review, corrections, advice, and code optimizations by Tom Lane. Backpatched to v11. Prior to that only paths relative to the data and log dirs were allowed for files, so no "zero length" files were reachable anyway.

    Reviewed-By: Tom Lane
    Discussion: https://postgr.es/m/flat/969b8d82-5bb2-5fa8-4eb1-f0e685c5d736%40joeconway.com
    Backpatch-through: 11
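
    A case that previously came back empty (sketch; requires superuser or the pg_read_server_files role, and a platform with /proc):

        SELECT length(pg_read_file('/proc/self/status'));
        -- stat reports size 0 for this virtual file, but the content is now read to EOF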
* Add +(pg_lsn,numeric) and -(pg_lsn,numeric) operators. (Fujii Masao, 2020-06-30)
    By using these operators, a number of bytes can be added to, and subtracted from, an LSN.

    Bump catalog version.

    Author: Fujii Masao
    Reviewed-by: Kyotaro Horiguchi, Michael Paquier, Asif Rehman
    Discussion: https://postgr.es/m/ed9f7f74-e996-67f8-554a-52ebd3779b3b@oss.nttdata.com
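
    For instance, on v14 or later one would expect:

        SELECT '0/16AE7F8'::pg_lsn + 16::numeric;   -- 0/16AE808
        SELECT '0/16AE7F8'::pg_lsn - 16::numeric;   -- 0/16AE7E8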
* Mop up some no-longer-necessary hacks around printf %.*s format. (Tom Lane, 2020-06-29)
    Commit 54cd4f045 added some kluges to work around an old glibc bug, namely that %.*s could misbehave if glibc thought any characters in the supplied string were incorrectly encoded. Now that we use our own snprintf.c implementation, we need not worry about that bug (even if it still exists in the wild).

    Revert a couple of particularly ugly hacks, and remove or improve assorted comments.

    Note that there can still be encoding-related hazards here: blindly clipping at a fixed length risks producing wrongly-encoded output if the clip splits a multibyte character. However, code that's doing correct multibyte-aware clipping doesn't really need a comment about that, while code that isn't needs an explanation why not, rather than a red-herring comment about an obsolete bug.

    Discussion: https://postgr.es/m/279428.1593373684@sss.pgh.pa.us
* Avoid using %c printf format for potentially non-ASCII characters. (Tom Lane, 2020-06-29)
    Since %c only passes a C "char" to printf, it's incapable of dealing with multibyte characters. Passing just the first byte of such a character leads to an output string that is visibly not correctly encoded, resulting in undesirable behavior such as encoding conversion failures while sending error messages to clients.

    We've lived with this issue for a long time because it was inconvenient to avoid in a portable fashion. However, now that we always use our own snprintf code, it's reasonable to use the %.*s format to print just one possibly-multibyte character in a string. (We previously avoided that obvious-looking answer in order to work around glibc's bug #6530, cf commits 54cd4f045 and ed437e2b2.)

    Hence, run around and fix a bunch of places that used %c to report a character found in a user-supplied string. For simplicity, I did not touch places that were emitting non-user-facing debug messages, or reporting catalog data that should always be ASCII. (It's also unclear how useful this approach could be in frontend code, where it's less certain that we know what encoding we're dealing with.)

    In passing, improve a couple of poorly-written error messages in pageinspect/heapfuncs.c.

    This is a longstanding issue, but I'm hesitant to back-patch because of the impact on translatable message strings. In any case this fix would not work reliably before v12.

    Tom Lane and Quan Zongliang

    Discussion: https://postgr.es/m/a120087c-4c88-d9d4-1ec5-808d7a7f133d@gmail.com
* Disallow factorial of negative numbers (Peter Eisentraut, 2020-06-18)
    The previous implementation returned 1 for all negative numbers, which is not sensible under any definition.

    Discussion: https://www.postgresql.org/message-id/flat/6ce1df0e-86a3-e544-743a-f357ff663f68%402ndquadrant.com
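
    After this change one would expect:

        SELECT factorial(3);    -- 6
        SELECT factorial(-4);   -- ERROR (previously returned 1)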
* In dpow(), remove redundant check for whether y is an integer. (Tom Lane, 2020-06-16)
    I failed to notice that we don't really need to check for y being an integer in the code path where x = -inf; we already did.

    Also make some further cosmetic rearrangements in that spot in hopes of dodging the seeming compiler bug that buildfarm member fossa is hitting. And be consistent about declaring variables as "float8" not "double", since the pre-existing variables in this function are like that.

    Discussion: https://postgr.es/m/E1jkyFX-0005RR-1Q@gemulon.postgresql.org
* Fix power() for large inputs yet more. (Tom Lane, 2020-06-15)
    Buildfarm results for commit e532b1d57 reveal the error in my thinking about the unexpected-EDOM case. I'd supposed this was no longer really a live issue, but it seems the fix for glibc's bug #3866 is not all that old, and we still have at least one buildfarm animal (lapwing) with the bug. Hence, resurrect essentially the previous logic (but, I hope, less opaquely presented), and explain what it is we're really doing here.

    Also, blindly try to fix fossa's failure by tweaking the logic that figures out whether y is an odd integer when x is -inf. This smells a whole lot like a compiler bug, but I lack access to icc to try to pin it down. Maybe doing division instead of multiplication will dodge the issue.

    Discussion: https://postgr.es/m/E1jkU7H-00024V-NZ@gemulon.postgresql.org
* Fix power() for infinity inputs some more. (Tom Lane, 2020-06-15)
    Buildfarm results for commit decbe2bfb show that AIX and illumos have non-POSIX-compliant pow() functions, as do ancient NetBSD and HPUX releases. While it's dubious how much we should care about the latter two platforms, the former two are probably enough reason to put in manual handling of infinite-input cases. Hence, do so, and clean up the post-pow() error handling to reflect its now-more-limited scope.

    (Notably, while we no longer expect to ever see EDOM from pow(), report it as a domain error if we do. The former coding had the net effect of expensively converting the error to ERANGE, which seems highly questionable: if pow() wanted to report ERANGE, it would have done so.)

    Patch by me; thanks to Michael Paquier for review.

    Discussion: https://postgr.es/m/E1jkU7H-00024V-NZ@gemulon.postgresql.org
* Fix some comments referring to past features (Michael Paquier, 2020-06-15)
    Timestamp can only be an int64 since b9d092c, and support for WITH OIDS has been removed as of 578b229.

    Author: Justin Pryzby
    Discussion: https://postgr.es/m/20200612023709.GC14879@telsasoft.com
* Fix behavior of exp() and power() for infinity inputs. (Tom Lane, 2020-06-14)
    Previously, these functions tended to throw underflow errors for negative-infinity exponents. The correct thing per POSIX is to return 0, so let's do that instead. (Note that the SQL standard is silent on such issues, as it lacks the concepts of either Inf or NaN; so our practice is to follow POSIX whenever a corresponding C-library function exists.)

    Also, add a bunch of test cases verifying that exp() and power() actually do follow POSIX for Inf and NaN inputs. While this patch should guarantee that exp() passes the tests, power() will not unless the platform's pow(3) is fully POSIX-compliant. I already know that gaur fails some of the tests, and I am suspicious that the Windows animals will too; the extent of compliance of other old platforms remains to be seen. We might choose to drop failing test cases, or to work harder at overriding pow(3) for these cases, but first let's see just how good or bad the situation is.

    Discussion: https://postgr.es/m/582552.1591917752@sss.pgh.pa.us
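
    Per POSIX, one would now expect:

        SELECT exp('-infinity'::float8);                  -- 0
        SELECT power(2.0::float8, '-infinity'::float8);   -- 0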
* Sync behavior of var_samp and stddev_samp for single NaN inputs. (Tom Lane, 2020-06-13)
    var_samp(numeric) and stddev_samp(numeric) disagreed with their float cousins about what to do for a single non-null input value that is NaN. The float versions return NULL on the grounds that the calculation is only defined for more than one non-null input, which seems like the right answer. But the numeric versions returned NaN, as a result of dealing with edge cases in the wrong order. Fix that. The patch also gets rid of an insignificant memory leak in such cases.

    This inconsistency is of long standing, but on the whole it seems best not to back-patch the change into stable branches; nobody's complained and it's such an obscure point that nobody's likely to complain. (Note that v13 and v12 now contain test cases that will notice if we accidentally back-patch this behavior change in future.)

    Report and patch by me; thanks to Dean Rasheed for review.

    Discussion: https://postgr.es/m/353062.1591898766@sss.pgh.pa.us
* Fix behavior of float aggregates for single Inf or NaN inputs. (Tom Lane, 2020-06-13)
    When there is just one non-null input value, and it is infinity or NaN, aggregates such as stddev_pop and covar_pop should produce a NaN result, because the calculation is not well-defined. They used to do so, but since we adopted Youngs-Cramer aggregation in commit e954a727f, they produced zero instead. That's an oversight, so fix it. Add tests exercising these edge cases.

    Affected aggregates are:
    var_pop(double precision)
    stddev_pop(double precision)
    var_pop(real)
    stddev_pop(real)
    regr_sxx(double precision,double precision)
    regr_syy(double precision,double precision)
    regr_sxy(double precision,double precision)
    regr_r2(double precision,double precision)
    regr_slope(double precision,double precision)
    regr_intercept(double precision,double precision)
    covar_pop(double precision,double precision)
    corr(double precision,double precision)

    Back-patch to v12 where the behavior change was accidentally introduced.

    Report and patch by me; thanks to Dean Rasheed for review.

    Discussion: https://postgr.es/m/353062.1591898766@sss.pgh.pa.us
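
    For instance, after the fix one would expect:

        SELECT var_pop(x) FROM (VALUES ('NaN'::float8)) AS v(x);
        -- NaN (the accidental result since v12 had been 0)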
* Have pg_itoa, pg_ltoa and pg_lltoa return the length of the string (David Rowley, 2020-06-13)
    Core by no means makes excessive use of these functions, but quite a large number of those usages do require the caller to call strlen() on the returned string. This is quite wasteful since these functions do already have a good idea of the length of the string, so we might as well just have them return that.

    Reviewed-by: Andrew Gierth
    Discussion: https://postgr.es/m/CAApHDvrm2A5x2uHYxsqriO2cUaGcFvND%2BksC9e7Tjep0t2RK_A%40mail.gmail.com
* Add missing extern keyword for a couple of numutils functions (David Rowley, 2020-06-13)
    In passing, also remove a few surplus empty lines from pg_ltoa and pg_ulltoa_n in numutils.c.

    Reported-by: Andrew Gierth
    Discussion: https://postgr.es/m/87y2ou3xuh.fsf@news-spur.riddles.org.uk
    Backpatch-through: 13, where these changes were introduced
* Make more use of RELKIND_HAS_STORAGE() (Peter Eisentraut, 2020-06-12)
    Make use of RELKIND_HAS_STORAGE() where appropriate, instead of listing out the relkinds individually. No behavior change intended.

    Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
    Discussion: https://www.postgresql.org/message-id/flat/7a22bf51-2480-d999-1794-191ba67ff47c%402ndquadrant.com
* Fix mishandling of NaN counts in numeric_[avg_]combine. (Tom Lane, 2020-06-11)
    When merging two NumericAggStates, the code missed adding the new state's NaNcount unless its N was also nonzero; since those counts are independent, this is wrong. This would only have visible effect if some partial aggregate scans found only NaNs while earlier ones found only non-NaNs; then we could end up falsely deciding that there were no NaNs and fail to return a NaN final result as expected. That's pretty improbable, so it's no surprise this hasn't been reported from the field. Still, it's a bug.

    I didn't try to produce a regression test that would show the bug, but I did notice that these functions weren't being reached at all in our regression tests, so I improved the tests to at least exercise them. With these additions, I see pretty complete code coverage on the aggregation-related functions in numeric.c.

    Back-patch to 9.6 where this code was introduced. (I only added the improved test case as far back as v10, though, since the relevant part of aggregates.sql isn't there at all in 9.6.)
* Fix invalid function references in a few comments (David Rowley, 2020-06-09)
    These appear to have been forgotten when the functions were renamed in 1fd687a03.

    Backpatch-through: 13, where the functions were renamed
* Improve ineq_histogram_selectivity's behavior for non-default orderings. (Tom Lane, 2020-06-05)
    ineq_histogram_selectivity() can be invoked in situations where the ordering we care about is not that of the column's histogram. We could be considering some other collation, or even more drastically, the query operator might not agree at all with what was used to construct the histogram. (We'll get here for anything using scalarineqsel-based estimators, so that's quite likely to happen for extension operators.)

    Up to now we just ignored this issue and assumed we were dealing with an operator/collation whose sort order exactly matches the histogram, possibly resulting in junk estimates if the binary search gets confused. It's past time to improve that, since the use of nondefault collations is increasing.

    What we can do is verify that the given operator and collation match what's recorded in pg_statistic, and use the existing code only if so. When they don't match, instead execute the operator against each histogram entry, and take the fraction of successes as our selectivity estimate. This gives an estimate that is probably good to about 1/histogram_size, with no assumptions about ordering. (The quality of the estimate is likely to degrade near the ends of the value range, since the two orderings probably don't agree on what is an extremal value; but this is surely going to be more reliable than what we did before.)

    At some point we might further improve matters by storing more than one histogram calculated according to different orderings. But this code would still be good fallback logic when no matches exist, so that is not an argument for not doing this.

    While here, also improve get_variable_range() to deal more honestly with non-default collations.

    This isn't back-patchable, because it requires adding another argument to ineq_histogram_selectivity, and because it might have significant impact on the estimation results for extension operators relying on scalarineqsel --- mostly for the better, one hopes, but in any case destabilizing plan choices in back branches is best avoided.

    Per investigation of a report from James Lucas.

    Discussion: https://postgr.es/m/CAAFmbbOvfi=wMM=3qRsPunBSLb8BFREno2oOzSBS=mzfLPKABw@mail.gmail.com
* Use query collation, not column's collation, while examining statistics. (Tom Lane, 2020-06-05)
    Commit 5e0928005 changed the planner so that, instead of blindly using DEFAULT_COLLATION_OID when invoking operators for selectivity estimation, it would use the collation of the column whose statistics we're considering. This was recognized as still being not quite the right thing, but it seemed like a good incremental improvement. However, shortly thereafter we introduced nondeterministic collations, and that creates cases where operators can fail if they're passed the wrong collation. We don't want planning to fail in cases where the query itself would work, so this means that we *must* use the query's collation when invoking operators for estimation purposes.

    The only real problem this creates is in ineq_histogram_selectivity, where the binary search might produce a garbage answer if we perform comparisons using a different collation than the column's histogram is ordered with. However, when the query's collation is significantly different from the column's default collation, the estimate we previously generated would be pretty irrelevant anyway; so it's not clear that this will result in noticeably worse estimates in practice. (A follow-on patch will improve this situation in HEAD, but it seems too invasive for back-patch.)

    The patch requires changing the signatures of mcv_selectivity and allied functions, which are exported and very possibly are used by extensions. In HEAD, I just did that, but an API/ABI break of this sort isn't acceptable in stable branches. Therefore, in v12 the patch introduces "mcv_selectivity_ext" and so on, with signatures matching HEAD, and makes the old functions into wrappers that assume DEFAULT_COLLATION_OID should be used. That does not match the prior behavior, but it should avoid risk of failure in most cases. (In practice, I think most extension datatypes aren't collation-aware, so the change probably doesn't matter to them.)

    Per report from James Lucas. Back-patch to v12 where the problem was introduced.

    Discussion: https://postgr.es/m/CAAFmbbOvfi=wMM=3qRsPunBSLb8BFREno2oOzSBS=mzfLPKABw@mail.gmail.com
* Reject "23:59:60.nnn" in datetime input.Tom Lane2020-06-04
| | | | | | | | | | | | | | | | | | | | | | | | | | | | It's intentional that we don't allow values greater than 24 hours, while we do allow "24:00:00" as well as "23:59:60" as inputs. However, the range check was miscoded in such a way that it would accept "23:59:60.nnn" with a nonzero fraction. For time or timetz, the stored result would then be greater than "24:00:00" which would fail dump/reload, not to mention possibly confusing other operations. Fix by explicitly calculating the result and making sure it does not exceed 24 hours. (This calculation is redundant with what will happen later in tm2time or tm2timetz. Maybe someday somebody will find that annoying enough to justify refactoring to avoid the duplication; but that seems too invasive for a back-patched bug fix, and the cost is probably unmeasurable anyway.) Note that this change also rejects such input as the time portion of a timestamp(tz) value. Back-patch to v10. The bug is far older, but to change this pre-v10 we'd need to ensure that the logic behaves sanely with float timestamps, which is possibly nontrivial due to roundoff considerations. Doesn't really seem worth troubling with. Per report from Christoph Berg. Discussion: https://postgr.es/m/20200520125807.GB296739@msg.df7cb.de
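
    After the fix one would expect:

        SELECT '23:59:60'::time;     -- still accepted, stored as 24:00:00
        SELECT '23:59:60.5'::time;   -- now rejected with an out-of-range error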
* Fix use-after-release mistake in currtid() and currtid2() for views (Michael Paquier, 2020-06-01)
    This issue has been present since the introduction of this code as of a3519a2 from 2002, and has been found by buildfarm member prion that uses RELCACHE_FORCE_RELEASE via the tests introduced recently in e786be5.

    Discussion: https://postgr.es/m/20200601022055.GB4121@paquier.xyz
    Backpatch-through: 9.5
* Fix crashes with currtid() and currtid2() (Michael Paquier, 2020-06-01)
    A relation that has no storage initializes rd_tableam to NULL, which caused those two functions to crash because of a pointer dereference. Note that in 11 and older versions, this has always failed with a confusing error "could not open file".

    These two functions are used by the Postgres ODBC driver, which requires them only when connecting to a backend strictly older than 8.1. When connected to 8.2 or a newer version, the driver uses a RETURNING clause instead, whose support was added in 8.2, so it should be possible to just remove both functions in the future. This is left as an issue to address later.

    While on it, add more regression tests for those functions, as we never really had coverage for them, nor for aggregates of TIDs.

    Reported-by: Jaime Casanova, via sqlsmith
    Author: Michael Paquier
    Reviewed-by: Álvaro Herrera
    Discussion: https://postgr.es/m/CAJGNTeO93u-5APMga6WH41eTZ3Uee9f3s8dCpA-GSSqNs1b=Ug@mail.gmail.com
    Backpatch-through: 12
* Add CHECK_FOR_INTERRUPTS() to the repeat() function (Joe Conway, 2020-05-28)
    The repeat() function loops for potentially a long time without ever checking for interrupts. This prevents, for example, a query cancel from interrupting until the work is all done. Fix by inserting a CHECK_FOR_INTERRUPTS() into the loop.

    Backpatch to all supported versions.

    Discussion: https://www.postgresql.org/message-id/flat/8692553c-7fe8-17d9-cbc1-7cddb758f4c6%40joeconway.com
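
    The loop in question is reachable with something like the following, which can now be cancelled promptly with a query cancel:

        SELECT length(repeat('x', 100000000));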