path: root/src/backend
* Fix minor typo in dsa.c. (Thomas Munro, 2018-11-29)
  Author: Takeshi Ideriha
  Discussion: https://postgr.es/m/4E72940DA2BF16479384A86D54D0988A6F3BF22D%40G01JPEXMBKW04
* Fix handling of synchronous replication for stopping WAL senders (Michael Paquier, 2018-11-29)
  This fixes an oversight from c6c3334 which forgot that if a subset of WAL
  senders are stopping and in a sync state, other WAL senders could still be
  waiting for a WAL position to be synced while committing a transaction.
  However the subset of stopping senders would not release waiters, potentially
  breaking synchronous replication guarantees.  This commit makes sure that even
  stopping WAL senders are able to release waiters and are tracked properly.

  On 9.4, this can also trigger an assertion failure when setting, for example,
  max_wal_senders to 1, where a WAL sender is not able to find itself in a
  synchronous state when the instance stops.

  Reported-by: Paul Guo
  Author: Paul Guo, Michael Paquier
  Discussion: https://postgr.es/m/CAEET0ZEv8VFqT3C-cQm6byOB4r4VYWcef1J21dOX-gcVhCSpmA@mail.gmail.com
  Backpatch-through: 9.4
* Have BufFileSize() ereport() on FileSize() failure. (Peter Geoghegan, 2018-11-28)
  Move the responsibility for checking for and reporting a failure from the only
  current BufFileSize() caller, logtape.c, to BufFileSize() itself.  Code within
  buffile.c is generally responsible for interfacing with fd.c to report
  irrecoverable failures.  This seems like a convention that's worth sticking to.

  Reorganizing things this way makes it easy to make the error message raised in
  the event of BufFileSize() failure descriptive of the underlying problem.
  We're now clear on the distinction between temporary file name and BufFile
  name, and can show errno, confident that its value actually relates to the
  error being reported.  In passing, an existing, similar buffile.c ereport() +
  errcode_for_file_access() site is changed to follow the same conventions.

  The API of the function BufFileSize() is changed by this commit, despite
  already being in a stable release (Postgres 11).  This seems acceptable, since
  the BufFileSize() ABI was changed by commit aa551830421, which hasn't made it
  into a point release yet.  Besides, it's difficult to imagine a third party
  BufFileSize() caller not just raising an error anyway, since BufFile state
  should be considered corrupt when BufFileSize() fails.

  Per complaint from Tom Lane.
  Discussion: https://postgr.es/m/26974.1540826748@sss.pgh.pa.us
  Backpatch: 11-, where shared BufFiles were introduced.
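  A minimal sketch of the convention described above (the field and constant
  names are assumptions based on buffile.c, not the committed code):

      /* Hypothetical shape of the check now done inside BufFileSize(). */
      off_t
      BufFileSize(BufFile *file)
      {
          off_t   lastFileSize;

          /* Ask fd.c for the size of the last physical segment. */
          lastFileSize = FileSize(file->files[file->numFiles - 1]);
          if (lastFileSize < 0)
              ereport(ERROR,
                      (errcode_for_file_access(),
                       errmsg("could not determine size of temporary file \"%s\" from BufFile \"%s\": %m",
                              FilePathName(file->files[file->numFiles - 1]),
                              file->name)));

          return ((file->numFiles - 1) * (off_t) MAX_PHYSICAL_FILESIZE) +
                 lastFileSize;
      }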
* Only allow one recovery target setting (Peter Eisentraut, 2018-11-28)
  The previous recovery.conf regime accepted multiple recovery_target* settings
  and used the last one.  This does not translate well to the general GUC
  system.  Specifically, under EXEC_BACKEND, the settings are written out not in
  any particular order, so the order in which they were originally set is not
  available to new processes.

  Rather than redesign the GUC system, it was decided to abandon the old
  behavior and only allow one recovery target setting.  A second setting will
  cause an error.  However, it is allowed to set the same parameter multiple
  times or unset a parameter and set a different one.

  Discussion: https://www.postgresql.org/message-id/flat/27802171543235530%40iva2-6ec8f0a6115e.qloud-c.yandex.net#701a59c837ad0bf8c244344aaf3ef5a4
* Don't set PAM_RHOST for Unix sockets. (Thomas Munro, 2018-11-28)
  Since commit 2f1d2b7a we have set PAM_RHOST to "[local]" for Unix sockets.
  This caused Linux PAM's libaudit integration to make DNS requests for that
  name.  It's not exactly clear what value PAM_RHOST should have in that case,
  but it seems clear that we shouldn't set it to an unresolvable name, so don't
  do that.

  Back-patch to 9.6.  Bug #15520.

  Author: Thomas Munro
  Reviewed-by: Peter Eisentraut
  Reported-by: Albert Schabhuetl
  Discussion: https://postgr.es/m/15520-4c266f986998e1c5%40postgresql.org
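  A minimal sketch of the resulting behavior (variable names are illustrative,
  not the committed code):

      /* Only set PAM_RHOST for TCP/IP connections; for Unix-domain sockets,
       * leave it unset so libaudit has nothing to try to resolve. */
      if (port->raddr.addr.ss_family != AF_UNIX)
      {
          retval = pam_set_item(pamh, PAM_RHOST, hostinfo);
          if (retval != PAM_SUCCESS)
              ereport(LOG,
                      (errmsg("pam_set_item(PAM_RHOST) failed: %s",
                              pam_strerror(pamh, retval))));
      }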
* Do not decode TOAST data for table rewrites (Tomas Vondra, 2018-11-28)
  During table rewrites (VACUUM FULL and CLUSTER), the main heap is logged using
  XLOG / FPI records, and thus (correctly) ignored in decoding.  But the
  associated TOAST table is WAL-logged as plain INSERT records, and so was
  logically decoded and passed to the reorder buffer.

  That has severe consequences with TOAST tables of non-trivial size.  Firstly,
  the reorder buffer has to keep all those changes, possibly spilling them to a
  file, incurring I/O costs and disk space.  Secondly, ReorderBufferCommit() was
  stashing all those TOAST chunks into a hash table, which got discarded only
  after processing the row from the main heap.  But as the main heap is not
  decoded for rewrites, this never happened, so all the TOAST data accumulated
  in memory, resulting either in excessive memory consumption or OOM.

  The fix is simple, as commit e9edc1ba already introduced infrastructure
  (namely the HEAP_INSERT_NO_LOGICAL flag) to skip logical decoding of TOAST
  tables, but it only applied it to system tables.  So simply use it for all
  TOAST data in raw_heap_insert().

  That would however solve only the memory consumption issue - the TOAST changes
  would still be decoded and added to the reorder buffer, and spilled to disk
  (although without TOAST tuple data, so much smaller).  But we can solve that
  by tweaking DecodeInsert() to just ignore such INSERT records altogether,
  using the XLH_INSERT_CONTAINS_NEW_TUPLE flag, instead of skipping them later
  in ReorderBufferCommit().

  Review: Masahiko Sawada
  Discussion: https://www.postgresql.org/message-id/flat/1a17c643-e9af-3dba-486b-fbe31bc1823a%402ndquadrant.com
  Backpatch: 9.4-, where logical decoding was introduced
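  The DecodeInsert() tweak amounts to an early exit; a sketch of the assumed
  shape (not the exact patch):

      /* Records without new tuple data (e.g. TOAST inserts written during a
       * table rewrite) carry nothing of interest for logical decoding. */
      if (!(xlrec->flags & XLH_INSERT_CONTAINS_NEW_TUPLE))
          return;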
* Don't count zero-filled buffers as 'read' in EXPLAIN. (Thomas Munro, 2018-11-28)
  If you extend a relation, it should count as a block written, not read (we
  write a zero-filled block).  If you ask for a zero-filled buffer, it shouldn't
  be counted as read or written.

  Later we might consider counting zero-filled buffers with a separate counter,
  if they become more common due to future work.

  Author: Thomas Munro
  Reviewed-by: Haribabu Kommi, Kyotaro Horiguchi, David Rowley
  Discussion: https://postgr.es/m/CAEepm%3D3JytB3KPpvSwXzkY%2Bdwc5zC8P8Lk7Nedkoci81_0E9rA%40mail.gmail.com
* Fix jit compilation bug on wide tables. (Andres Freund, 2018-11-27)
  The function generated to perform JIT compiled tuple deforming failed when
  HeapTupleHeader's t_hoff was bigger than a signed int8.  I'd failed to realize
  that LLVM's getelementptr would treat an int8 index argument as signed, rather
  than unsigned.  That means that a hoff larger than 127 would result in a
  negative offset being applied.  Fix that by widening the index to 32bit.

  Add a testcase with a wide table.  Don't drop it, as it seems useful to verify
  other tools deal properly with wide tables.

  Thanks to Justin Pryzby for both reporting a bug and then reducing it to a
  reproducible testcase!

  Reported-By: Justin Pryzby
  Author: Andres Freund
  Discussion: https://postgr.es/m/20181115223959.GB10913@telsasoft.com
  Backpatch: 11, just as jit compilation was
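  A plain-C analogue of the signedness problem (illustrative only; the real fix
  is in the LLVM IR generated by llvmjit_deform.c):

      #include <stdio.h>
      #include <stdint.h>

      int main(void)
      {
          unsigned char t_hoff = 136;   /* header size of a wide tuple, > 127 */

          /* Interpreted as a signed 8-bit index, as getelementptr does: */
          printf("int8 offset:  %d\n", (int) (int8_t) t_hoff);   /* -120, wrong */
          /* Widened to 32 bits, as the fix does: */
          printf("int32 offset: %d\n", (int) (int32_t) t_hoff);  /*  136, correct */
          return 0;
      }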
* Integrate recovery.conf into postgresql.conf (Peter Eisentraut, 2018-11-25)
  recovery.conf settings are now set in postgresql.conf (or other GUC sources).
  Currently, all the affected settings are PGC_POSTMASTER; this could be refined
  in the future case by case.

  Recovery is now initiated by a file recovery.signal.  Standby mode is
  initiated by a file standby.signal.  The standby_mode setting is gone.  If a
  recovery.conf file is found, an error is issued.

  The trigger_file setting has been renamed to promote_trigger_file as part of
  the move.

  The documentation chapter "Recovery Configuration" has been integrated into
  "Server Configuration".

  pg_basebackup -R now appends settings to postgresql.auto.conf and creates a
  standby.signal file.

  Author: Fujii Masao <masao.fujii@gmail.com>
  Author: Simon Riggs <simon@2ndquadrant.com>
  Author: Abhijit Menon-Sen <ams@2ndquadrant.com>
  Author: Sergei Kornilov <sk@zsrv.org>
  Discussion: https://www.postgresql.org/message-id/flat/607741529606767@web3g.yandex.ru/
* Fix assertion failure for SSL connections. (Thomas Munro, 2018-11-25)
  Commit cfdf4dc4 added an assertion that every WaitLatch() or similar handles
  postmaster death.  One place did not, but was missed in review and testing due
  to the need for an SSL connection.  Fix, by asking for WL_EXIT_ON_PM_DEATH.

  Reported-by: Christoph Berg
  Discussion: https://postgr.es/m/20181124143845.GA15039%40msg.df7cb.de
* Fix float-to-integer coercions to handle edge cases correctly. (Tom Lane, 2018-11-23)
  ftoi4 and its sibling coercion functions did their overflow checks in a way
  that looked superficially plausible, but actually depended on an assumption
  that the MIN and MAX comparison constants can be represented exactly in the
  float4 or float8 domain.  That fails in ftoi4, ftoi8, and dtoi8, resulting in
  a possibility that values near the MAX limit will be wrongly converted (to
  negative values) when they need to be rejected.

  Also, because we compared before rounding off the fractional part, the other
  three functions threw errors for values that really ought to get rounded to
  the min or max integer value.

  Fix by doing rint() first (requiring an assumption that it handles NaN and Inf
  correctly; but dtoi8 and ftoi8 were assuming that already), and by comparing
  to values that should coerce to float exactly, namely INTxx_MIN and
  -INTxx_MIN.  Also remove some random cosmetic discrepancies between these six
  functions.

  Per bug #15519 from Victor Petrovykh.  This should get back-patched, but first
  let's see what the buildfarm thinks of it --- I'm not too sure about
  portability of some of the regression test cases.

  Patch by me; thanks to Andrew Gierth for analysis and discussion.
  Discussion: https://postgr.es/m/15519-4fc785b483201ff1@postgresql.org
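  A sketch of the resulting pattern for ftoi4 (assumed shape, not the exact
  committed code):

      Datum
      ftoi4(PG_FUNCTION_ARGS)
      {
          float4      num = PG_GETARG_FLOAT4(0);

          /* Round first, so values just inside the integer range are accepted. */
          num = rint(num);

          /*
           * Compare against bounds that coerce to float4 exactly: INT32_MIN
           * (and hence -INT32_MIN) has a power-of-two magnitude, unlike
           * INT32_MAX.
           */
          if (unlikely(isnan(num) ||
                       num < (float4) PG_INT32_MIN ||
                       num >= -((float4) PG_INT32_MIN)))
              ereport(ERROR,
                      (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
                       errmsg("integer out of range")));

          PG_RETURN_INT32((int32) num);
      }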
* Clamp semijoin selectivity to be not more than inner-join selectivity. (Tom Lane, 2018-11-23)
  We should never estimate the output of a semijoin to be more rows than we
  estimate for an inner join with the same input rels and join condition; it's
  obviously impossible for that to happen.  However, given the relatively poor
  quality of our semijoin selectivity estimates --- particularly, but not only,
  in cases where we punt and return a default estimate --- we did often deliver
  such estimates.  To improve matters, calculate both estimates inside
  eqjoinsel() and take the smaller one.

  The bulk of this patch is just mechanical refactoring to avoid repetitive
  information lookup when we call both eqjoinsel_semi and eqjoinsel_inner.  The
  actual new behavior is just

      selec = Min(selec, inner_rel->rows * selec_inner);

  which looks a bit odd but is correct because of our different definitions for
  inner and semi join selectivity.

  There is one ensuing plan change in the regression tests, but it looks
  reasonable enough (and checking the actual row counts shows that the estimate
  moved closer to reality, not further away).

  Per bug #15160 from Alexey Ermakov.  Although this is arguably a bug fix, I
  won't risk destabilizing plan choices in stable branches by back-patching.

  Tom Lane, reviewed by Melanie Plageman
  Discussion: https://postgr.es/m/152395805004.19366.3107109716821067806@wrigleys.postgresql.org
* Silence compiler warnings (Alvaro Herrera, 2018-11-23)
  Commit cfdf4dc4fc96 left a few unnecessary assignments, one of which caused
  compiler warnings, as reported by Erik Rijkers.  Remove them all.

  Discussion: https://postgr.es/m/df0dcca2025b3d90d946ecc508ca9678@xs4all.nl
* Don't allow partitioned indexes in pg_global tablespace (Alvaro Herrera, 2018-11-23)
  Missing in dfa608141982.

  Author: David Rowley
  Discussion: https://postgr.es/m/CAKJS1f-M3NMTCpv=vDfkoqHbMPFf=3-Z1ud=+1DHH00tC+zLaQ@mail.gmail.com
* Add WL_EXIT_ON_PM_DEATH pseudo-event. (Thomas Munro, 2018-11-23)
  Users of the WaitEventSet and WaitLatch() APIs can now choose between asking
  for WL_POSTMASTER_DEATH and then handling it explicitly, or asking for
  WL_EXIT_ON_PM_DEATH to trigger immediate exit on postmaster death.  This
  reduces code duplication, since almost all callers want the latter.

  Repair all code that was previously ignoring postmaster death completely, or
  requesting the event but ignoring it, or requesting the event but then doing
  an unconditional PostmasterIsAlive() call every time through its event loop
  (which is an expensive syscall on platforms for which we don't have
  USE_POSTMASTER_DEATH_SIGNAL support).

  Assert that callers of WaitLatchXXX() under the postmaster remember to ask for
  either WL_POSTMASTER_DEATH or WL_EXIT_ON_PM_DEATH, to prevent future bugs.

  The only process that doesn't handle postmaster death is syslogger.  It waits
  until all backends holding the write end of the syslog pipe (including the
  postmaster) have closed it by exiting, to be sure to capture any parting
  messages.  By using the WaitEventSet API directly it avoids the new assertion,
  and as a by-product it may be slightly more efficient on platforms that have
  epoll().

  Author: Thomas Munro
  Reviewed-by: Kyotaro Horiguchi, Heikki Linnakangas, Tom Lane
  Discussion: https://postgr.es/m/CAEepm%3D1TCviRykkUb69ppWLr_V697rzd1j3eZsRMmbXvETfqbQ%40mail.gmail.com,
              https://postgr.es/m/CAEepm=2LqHzizbe7muD7-2yHUbTOoF7Q+qkSD5Q41kuhttRTwA@mail.gmail.com
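  A typical wait-loop call after this change might look like this (sketch;
  naptime_ms and the wait event are placeholders):

      /* Sleep until the latch is set or the timeout expires; if the postmaster
       * dies, WL_EXIT_ON_PM_DEATH makes WaitLatch() exit the process for us. */
      (void) WaitLatch(MyLatch,
                       WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
                       naptime_ms,
                       WAIT_EVENT_PG_SLEEP);
      ResetLatch(MyLatch);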
* Fix another crash in json{b}_populate_recordset and json{b}_to_recordset. (Tom Lane, 2018-11-22)
  populate_recordset_worker() failed to consider the possibility that the
  supplied JSON data contains no rows, so that update_cached_tupdesc never got
  called.  This led to a null-pointer dereference since commit 9a5e8ed28; before
  that it led to a bogus "set-valued function called in context that cannot
  accept a set" error.  Fix by forcing the update to happen.

  Per bug #15514.  Back-patch to v11 as 9a5e8ed28 was.  (If we were excited
  about the bogus error, we could perhaps go back further, but it'd take more
  work to figure out how to fix it in older branches.  Given the lack of field
  complaints about that aspect, I'm not excited.)

  Discussion: https://postgr.es/m/15514-59d5b4c4065b178b@postgresql.org
* Fix typo in description of ExecFindPartition (Michael Paquier, 2018-11-22)
  Author: Amit Langote
  Discussion: https://postgr.es/m/CA+HiwqHg0=UL+Dhh3gpiwYNA=ufk9Lb7GQ2c=5rs=ZmVTP7xAw@mail.gmail.com
* Fix typo in commit 6f7d02aa60b7 (Alvaro Herrera, 2018-11-21)
  Per pink buildfarm.
* Fix PartitionDispatchData vertical whitespace (Alvaro Herrera, 2018-11-21)
  Per David Rowley
  Discussion: https://postgr.es/m/CAKJS1f-MstvBWdkOzACsOHyBgj2oXcBM8kfv+NhVe-Ux-wq9Sg@mail.gmail.com
* instr_time.h: add INSTR_TIME_SET_CURRENT_LAZY (Alvaro Herrera, 2018-11-21)
  Sets the timestamp to current if not already set.  Will acquire more callers
  momentarily.

  Author: Fabien Coelho
  Discussion: https://postgr.es/m/alpine.DEB.2.21.1808111104320.1705@lancre
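  A plausible shape for the new macro (sketch; see instr_time.h for the actual
  definition):

      /* Set t to the current time only if it is still zero; report whether it
       * was set here. */
      #define INSTR_TIME_SET_CURRENT_LAZY(t) \
          (INSTR_TIME_IS_ZERO(t) ? (INSTR_TIME_SET_CURRENT(t), true) : false)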
* Remove WITH OIDS support, change oid catalog column visibility. (Andres Freund, 2018-11-20)
  Previously tables declared WITH OIDS, including a significant fraction of the
  catalog tables, stored the oid column not as a normal column, but as part of
  the tuple header.  This special column was not shown by default, which was
  somewhat odd, as it's often (consider e.g. pg_class.oid) one of the more
  important parts of a row.  Neither pg_dump nor COPY included the contents of
  the oid column by default.

  The fact that the oid column was not an ordinary column necessitated a
  significant amount of special case code to support oid columns.  That already
  was painful for the existing code, but upcoming work aiming to make table
  storage pluggable would have required expanding and duplicating that
  "specialness" significantly.

  WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0).  Remove it.

  Removing includes:
  - CREATE TABLE and ALTER TABLE syntax for declaring the table to be WITH OIDS
    has been removed (WITH (oids[ = true]) will error out)
  - pg_dump does not support dumping tables declared WITH OIDS and will issue a
    warning when dumping one (and ignore the oid column).
  - restoring a pg_dump archive with pg_restore will warn when restoring a table
    with oid contents (and ignore the oid column)
  - COPY will refuse to load binary dumps that include oids.
  - pg_upgrade will error out when encountering tables declared WITH OIDS; they
    have to be altered to remove the oid column first.
  - Functionality to access the oid of the last inserted row (like plpgsql's
    RESULT_OID, spi's SPI_lastoid, ...) has been removed.

  The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false) for
  CREATE TABLE) is still supported.  While that requires a bit of support code,
  it seems unnecessary to break applications / dumps that do not use oids, and
  are explicit about not using them.

  The biggest user of WITH OID columns was postgres' catalog.  This commit
  changes all 'magic' oid columns to be columns that are normally declared and
  stored.  To reduce unnecessary query breakage all the newly added columns are
  still named 'oid', even if a table's column naming scheme would indicate
  'reloid' or such.

  This obviously requires adapting a lot of code, mostly replacing oid access
  via HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.

  The bootstrap process now assigns oids for all oid columns in genbki.pl that
  do not have an explicit value (starting at the largest oid previously used);
  only oids assigned later will be above FirstBootstrapObjectId.  As the oid
  column now is a normal column, the special bootstrap syntax for oids has been
  removed.

  Oids are not automatically assigned during insertion anymore; all backend code
  explicitly assigns oids with GetNewOidWithIndex().  For the rare case that
  insertions into the catalog via SQL are called for, the new pg_nextoid()
  function can be used (which only works on catalog tables).

  The fact that oid columns on system tables are now normal columns means that
  they will be included in the set of columns expanded by * (i.e. SELECT * FROM
  pg_class will now include the table's oid, previously it did not).  It'd not
  technically be hard to hide the oid column by default, but that'd mean
  confusing behavior would either have to be carried forward forever, or it'd
  cause breakage down the line.

  While it's not unlikely that further adjustments are needed, the
  scope/invasiveness of the patch makes it worthwhile to merge this now.  It's
  painful to maintain externally, too complicated to commit after the code
  freeze, and a dependency of a number of other patches.

  Catversion bump, for obvious reasons.

  Author: Andres Freund, with contributions by John Naylor
  Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
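  A sketch of the access-pattern change for callers (pg_class used as an
  example):

      /* Before: the oid lived in the tuple header. */
      Oid     relid_old = HeapTupleGetOid(tuple);

      /* After: it is an ordinary column of the catalog's row type. */
      Form_pg_class classForm = (Form_pg_class) GETSTRUCT(tuple);
      Oid     relid_new = classForm->oid;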
* Make detection of SSL_CTX_set_min_proto_version more portable (Peter Eisentraut, 2018-11-20)
  As already explained in configure.in, using the OpenSSL version number to
  detect presence of functions doesn't work, because LibreSSL reports
  incompatible version numbers.  Fortunately, the functions we need here are
  actually macros, so we can just test for them directly.
* Add settings to control SSL/TLS protocol version (Peter Eisentraut, 2018-11-20)
  For example:

      ssl_min_protocol_version = 'TLSv1.1'
      ssl_max_protocol_version = 'TLSv1.2'

  Reviewed-by: Steve Singer <steve@ssinger.info>
  Discussion: https://www.postgresql.org/message-id/flat/1822da87-b862-041a-9fc2-d0310c3da173@2ndquadrant.com
* Make WAL description output more consistent (Peter Eisentraut, 2018-11-20)
  The output for record types XLOG_DBASE_CREATE and XLOG_DBASE_DROP used the
  order dbid/tablespaceid, whereas elsewhere the order is
  tablespaceid/dbid[/relfilenodeid].  Flip the order for those two types to make
  it consistent.

  Author: Jean-Christophe Arnu <jcarnu@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/CAHZmTm18Ln62KW-G8NYvO1wbBL3QU1E76Zep=DuHmg-zS2XFAg@mail.gmail.com/
* Refine some guc.c help texts (Peter Eisentraut, 2018-11-20)
  These settings apply to communication with the sending server, which is not
  necessarily a primary.

  Author: Sergei Kornilov <sk@zsrv.org>
* Add needed #include. (Tom Lane, 2018-11-19)
  Per POSIX, WIFSIGNALED and related macros are provided by <sys/wait.h>.
  Apparently on Linux they're also pulled in by some other inclusions, but
  BSD-ish systems are pickier.  Fixes portability issue in ffa4cbd62.

  Per buildfarm.
* Handle EPIPE more sanely when we close a pipe reading from a program. (Tom Lane, 2018-11-19)
  Previously, any program launched by COPY TO/FROM PROGRAM inherited the
  server's setting of SIGPIPE handling, i.e. SIG_IGN.  Hence, if we were doing
  COPY FROM PROGRAM and closed the pipe early, the child process would see EPIPE
  on its output file and typically would treat that as a fatal error, in turn
  causing the COPY to report an error.  Similarly, one could get a failure
  report from a query that didn't read all of the output from a
  contrib/file_fdw foreign table that uses file_fdw's PROGRAM option.

  To fix, ensure that child programs inherit SIG_DFL not SIG_IGN processing of
  SIGPIPE.  This seems like an all-around better situation since if the called
  program wants some non-default treatment of SIGPIPE, it would expect to have
  to set that up for itself.  Then in COPY, if it's COPY FROM PROGRAM and we
  stop reading short of detecting EOF, treat a SIGPIPE exit from the called
  program as a non-error condition.  This still allows us to report an error
  for any case where the called program gets SIGPIPE on some other file
  descriptor.

  As coded, we won't report a SIGPIPE if we stop reading as a result of seeing
  an in-band EOF marker (e.g. COPY BINARY EOF marker).  It's somewhat debatable
  whether we should complain if the called program continues to transmit data
  after an EOF marker.  However, it seems like we should avoid throwing error
  in any questionable cases, especially in a back-patched fix, and anyway it
  would take additional code to make such an error get reported consistently.

  Back-patch to v10.  We could go further back, since COPY FROM PROGRAM has been
  around awhile, but AFAICS the only way to reach this situation using core or
  contrib is via file_fdw, which has only supported PROGRAM sources since v10.
  The COPY statement per se has no feature whereby it'd stop reading without
  having hit EOF or an error already.  Therefore, I don't see any upside to
  back-patching further that'd outweigh the risk of complaints about behavioral
  change.

  Per bug #15449 from Eric Cyr.
  Patch by me, review by Etsuro Fujita and Kyotaro Horiguchi
  Discussion: https://postgr.es/m/15449-1cf737dd5929450e@postgresql.org
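  One way to arrange the child-side behavior, sketched here (the committed code
  may differ in detail):

      /* The program inherits the SIGPIPE disposition in effect when it is
       * launched, so restore SIG_DFL around starting it while the server
       * itself keeps ignoring SIGPIPE. */
      pqsigfunc   save_sigpipe = pqsignal(SIGPIPE, SIG_DFL);
      FILE       *pipe_stream = popen(command, "r");

      pqsignal(SIGPIPE, save_sigpipe);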
* Reduce unnecessary list construction in RelationBuildPartitionDesc. (Robert Haas, 2018-11-19)
  The 'partoids' list which was constructed by the previous version of this code
  was necessarily identical to 'inhoids'.  There's no point to duplicating the
  list, so avoid that.  Instead, construct the array representation directly
  from the original 'inhoids' list.

  Also, use an array rather than a list for 'boundspecs'.  We know exactly how
  many items we need to store, so there's really no reason to use a list.
  Using an array instead reduces the number of memory allocations we perform.

  Patch by me, reviewed by Michael Paquier and Amit Langote, the latter of whom
  also helped with rebasing.
* Disallow COPY FREEZE on partitioned tables (Alvaro Herrera, 2018-11-19)
  This didn't actually work: COPY would fail to flush the right files, and
  instead would try to flush a non-existing file, causing the whole transaction
  to fail.  Cope by raising an error as soon as the command is sent instead, to
  avoid a nasty later surprise.

  Of course, it would be much better to make it work, but we don't have a patch
  for that yet, and we don't know if we'll want to backpatch one when we do.

  Reported-by: Tomas Vondra
  Author: David Rowley
  Reviewed-by: Amit Langote, Steve Singer, Tomas Vondra
* PANIC on fsync() failure. (Thomas Munro, 2018-11-19)
  On some operating systems, it doesn't make sense to retry fsync(), because
  dirty data cached by the kernel may have been dropped on write-back failure.
  In that case the only remaining copy of the data is in the WAL.  A subsequent
  fsync() could appear to succeed, but not have flushed the data.  That means
  that a future checkpoint could apparently complete successfully but have lost
  data.

  Therefore, violently prevent any future checkpoint attempts by panicking on
  the first fsync() failure.  Note that we already did the same for WAL data;
  this change extends that behavior to non-temporary data files.

  Provide a GUC data_sync_retry to control this new behavior, for users of
  operating systems that don't eject dirty data, and possibly forensic/testing
  uses.  If it is set to on and the write-back error was transient, a later
  checkpoint might genuinely succeed (on a system that does not throw away
  buffers on failure); if the error is permanent, later checkpoints will
  continue to fail.  The GUC defaults to off, meaning that we panic.

  Back-patch to all supported releases.

  There is still a narrow window for error-loss on some operating systems: if
  the file is closed and later reopened and a write-back error occurs in the
  intervening time, but the inode has the bad luck to be evicted due to memory
  pressure before we reopen, we could miss the error.  A later patch will
  address that with a scheme for keeping files with dirty data open at all
  times, but we judge that to be too complicated to back-patch.

  Author: Craig Ringer, with some adjustments by Thomas Munro
  Reported-by: Craig Ringer
  Reviewed-by: Robert Haas, Thomas Munro, Andres Freund
  Discussion: https://postgr.es/m/20180427222842.in2e4mibx45zdth5%40alap3.anarazel.de
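  A sketch of how an fsync() call site can honor the new behavior (the helper
  name and shape are assumptions, not necessarily the committed API):

      /* Escalate to PANIC unless the administrator opted into retries. */
      static int
      data_sync_elevel(int elevel)
      {
          return data_sync_retry ? elevel : PANIC;
      }

      /* ... at an fsync() call site ... */
      if (pg_fsync(fd) != 0)
          ereport(data_sync_elevel(ERROR),
                  (errcode_for_file_access(),
                   errmsg("could not fsync file \"%s\": %m", path)));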
* Don't forget about failed fsync() requests. (Thomas Munro, 2018-11-19)
  If fsync() fails, md.c must keep the request in its bitmap, so that future
  attempts will try again.

  Back-patch to all supported releases.

  Author: Thomas Munro
  Reviewed-by: Amit Kapila
  Reported-by: Andrew Gierth
  Discussion: https://postgr.es/m/87y3i1ia4w.fsf%40news-spur.riddles.org.uk
* Remove unnecessary memcpy when reading WAL record fitting on page (Michael Paquier, 2018-11-19)
  When reading a WAL record, its contents are copied into an intermediate
  buffer.  However, doing so is not necessary if the record fits fully into the
  current page, saving one memcpy for each such record.  The allocation handling
  of the intermediate buffer is also now done only when a record crosses a page
  boundary, shaving some extra cycles when reading a WAL record.

  Author: Andrey Lepikhov
  Reviewed-by: Kyotaro Horiguchi, Heikki Linnakangas
  Discussion: https://postgr.es/m/c2ea54dd-a1d3-80eb-ddbf-7e6f258e615e@postgrespro.ru
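  A sketch of the fast path (assumed shape; readBuf is the xlogreader page
  buffer, and the cross-page helper named here is hypothetical):

      uint32      page_off = RecPtr % XLOG_BLCKSZ;

      if (total_len <= XLOG_BLCKSZ - page_off)
      {
          /* Whole record is on the current page: no intermediate copy. */
          record = (XLogRecord *) (state->readBuf + page_off);
      }
      else
      {
          /* Record crosses a page boundary: only now size the reusable
           * record buffer and assemble the record by copying. */
          record = assemble_cross_page_record(state, RecPtr, total_len);
      }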
* Avoid defining SIGTTIN/SIGTTOU on Windows. (Tom Lane, 2018-11-17)
  Setting them to SIG_IGN seems unlikely to have any beneficial effect on that
  platform, and given the signal numbering collision with SIGABRT, it could
  easily have bad effects.

  Given the lack of field complaints that can be traced to this, I don't
  presently feel a need to back-patch.

  Discussion: https://postgr.es/m/5627.1542477392@sss.pgh.pa.us
* Leave SIGTTIN/SIGTTOU signal handling alone in postmaster child processes. (Tom Lane, 2018-11-17)
  For reasons lost in the mists of time, most postmaster child processes reset
  SIGTTIN/SIGTTOU signal handling to SIG_DFL, with the major exception that
  backend sessions do not.  It seems like a pretty bad idea for any postmaster
  children to do that: if stderr is connected to the terminal, and the user has
  put the postmaster in background, any log output would result in the child
  process freezing up.  Hence, switch them all to doing what backends do, ie,
  nothing.  This allows them to inherit the postmaster's SIG_IGN setting.  On
  the other hand, manually-launched processes such as standalone backends will
  have default processing, which seems fine.

  In passing, also remove useless resets of SIGCONT and SIGWINCH signal
  processing.  Perhaps the postmaster once changed those to something besides
  SIG_DFL, but it doesn't now, so these are just wasted (and confusing)
  syscalls.

  Basically, this propagates the changes made in commit 8e2998d8a from backends
  to other postmaster children.  Probably the only reason these calls now exist
  elsewhere is that I missed changing pgstat.c along with postgres.c at the
  time.

  Given the lack of field complaints that can be traced to this, I don't
  presently feel a need to back-patch.

  Discussion: https://postgr.es/m/5627.1542477392@sss.pgh.pa.us
* Fix some spurious new compiler warnings in MSVC. (Andres Freund, 2018-11-17)
  Per buildfarm animal bowerbird.

  Discussion: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bowerbird&dt=2018-11-17%2002%3A30%3A20
* Make TupleTableSlots extensible, finish split of existing slot type. (Andres Freund, 2018-11-16)
  This commit completes the work prepared in 1a0586de36, splitting the old
  TupleTableSlot implementation (which could store buffer, heap, minimal and
  virtual slots) into four different slot types.  As described in the
  aforementioned commit, this is done with the goal of making tuple table slots
  extensible, to allow for pluggable table access methods.

  To achieve runtime extensibility for TupleTableSlots, operations on slots that
  can differ between types of slots are performed using the TupleTableSlotOps
  struct provided at slot creation time.  That includes information from the
  size of the TupleTableSlot struct to be allocated, to initialization,
  deforming, etc.  See the struct's definition for more detailed information
  about the TupleTableSlotOps callbacks.

  I decided to rename TTSOpsBufferTuple to TTSOpsBufferHeapTuple and
  ExecCopySlotTuple to ExecCopySlotHeapTuple, as that seems more consistent with
  other naming introduced in recent patches.

  There's plenty of optimization potential in the slot implementation, but
  according to benchmarking the state after this commit has similar performance
  characteristics to before this set of changes, which seems sufficient.

  There's a few changes in execReplication.c that currently need to poke through
  the slot abstraction; that'll be repaired once the pluggable storage patchset
  provides the necessary infrastructure.

  Author: Andres Freund and Ashutosh Bapat, with changes by Amit Khandekar
  Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
* Avoid re-typedef'ing PartitionTupleRouting (Alvaro Herrera, 2018-11-16)
  Apparently, gcc on macOS (?) doesn't like it.  Per buildfarm.
* Inline hot path of slot_getsomeattrs(). (Andres Freund, 2018-11-16)
  This yields a minor speedup, which roughly balances the loss from the upcoming
  introduction of callbacks to do some operations on slots.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
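  A sketch of the inlined hot path (assumed shape, close to what tuptable.h
  gained in this commit):

      static inline void
      slot_getsomeattrs(TupleTableSlot *slot, int attnum)
      {
          /* Common case: the columns are already deformed, no call needed. */
          if (slot->tts_nvalid < attnum)
              slot_getsomeattrs_int(slot, attnum);    /* out-of-line slow path */
      }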
* Redesign initialization of partition routing structures (Alvaro Herrera, 2018-11-16)
  This speeds up write operations (INSERT, UPDATE, DELETE, COPY, as well as the
  future MERGE) on partitioned tables.

  This changes the setup for tuple routing so that it does far less work during
  the initial setup and pushes more work out to when partitions receive tuples.
  PartitionDispatchData structs for sub-partitioned tables are only created when
  a tuple gets routed through it.  The possibly large arrays in the
  PartitionTupleRouting struct have largely been removed.  The partitions[]
  array remains but now never contains any NULL gaps.  Previously the NULLs had
  to be skipped during ExecCleanupTupleRouting(), which could add a large
  overhead to the cleanup when the number of partitions was large.  The
  partitions[] array is allocated small to start with and only enlarged when we
  route tuples to enough partitions that it runs out of space.  This allows us
  to keep simple single-row partition INSERTs running quickly.

  The arrays in PartitionTupleRouting which stored the tuple translation maps
  have now been removed.  These have been moved out into a PartitionRoutingInfo
  struct which is an additional field in ResultRelInfo.

  The find_all_inheritors() call still remains by far the slowest part of
  ExecSetupPartitionTupleRouting().  This commit just removes the other slow
  parts.

  In passing also rename the tuple translation maps from being ParentToChild
  and ChildToParent to being RootToPartition and PartitionToRoot.  The old
  names mislead you into thinking that a partition of some sub-partitioned
  table would translate to the rowtype of the sub-partitioned table rather than
  the root partitioned table.

  Authors: David Rowley and Amit Langote, heavily revised by Álvaro Herrera
  Testing help from Jesper Pedersen and Kato Sho.
  Discussion: https://postgr.es/m/CAKJS1f_1RJyFquuCKRFHTdcXqoPX-PYqAd7nz=GVBwvGh4a6xA@mail.gmail.com
* Fix slot type assumptions for nodeGather[Merge]. (Andres Freund, 2018-11-15)
  The assumption made in 1a0586de3657c was wrong, as evidenced by a buildfarm
  failure on locust, which runs with force_parallel_mode=regress.  The tuples
  accessed in either node are in the outer slot, and we can't trivially rely on
  the slot type being known because the leader might execute the subsidiary
  node directly, or via the tuple queue on a worker.  In the latter case the
  tuple will always be a heaptuple slot, but in the former, it'll be whatever
  the subsidiary node returns.
* Don't generate tuple deforming functions for virtual slots. (Andres Freund, 2018-11-15)
  Virtual tuple table slots never need tuple deforming.  Therefore, if we know
  at expression compilation time that a certain slot will always be virtual,
  there's no need to create a tuple deforming routine for it.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
* Verify that expected slot types match returned slot types. (Andres Freund, 2018-11-15)
  This is important so JIT compilation knows what kind of tuple slot the
  deforming routine can expect.  There's also optimization potential for
  expression initialization without JIT compilation.  It e.g. seems plausible to
  elide EEOP_*_FETCHSOME ops entirely when dealing with virtual slots.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
* Compute information about EEOP_*_FETCHSOME at expression init time. (Andres Freund, 2018-11-15)
  Previously this information was computed when JIT compiling an expression.
  But the information is useful for assertions in the non-JIT case too,
  therefore it makes sense to move it.

  This will, in a followup commit, allow treating different slot types
  differently.  E.g. for virtual slots there's no need to generate a JIT
  function to deform the slot.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
* Introduce notion of different types of slots (without implementing them). (Andres Freund, 2018-11-15)
  Upcoming work intends to allow pluggable ways to introduce new ways of storing
  table data.  Accessing those table access methods from the executor requires
  TupleTableSlots to carry tuples in the native format of such storage methods;
  otherwise there'll be a significant conversion overhead.

  Different access methods will require different data to store tuples
  efficiently (just like virtual, minimal, heap already require fields in
  TupleTableSlot).  To allow that without requiring additional pointer
  indirections, we want to have different structs (embedding TupleTableSlot)
  for different types of slots.  Thus different types of slots are needed,
  which requires adapting creators of slots.

  The slot that most efficiently can represent a type of tuple in an executor
  node will often depend on the type of slot a child node uses.  Therefore we
  need to track the type of slot returned by nodes, so parent nodes can create
  slots based on that.

  Relatedly, JIT compilation of tuple deforming needs to know which type of
  slot a certain expression refers to, so it can create an appropriate
  deforming function for the type of tuple in the slot.

  But not all nodes will only return one type of slot, e.g. an append node will
  potentially return different types of slots for each of its subplans.

  Therefore add a function that allows querying the type of a node's result
  slot, and whether it'll always be the same type (whether it's fixed).  This
  can be queried using ExecGetResultSlotOps().

  The scan, result, inner, outer types of slots are automatically inferred from
  ExecInitScanTupleSlot(), ExecInitResultSlot(), and the left/right subtrees
  respectively.  If that's not correct for a node, that can be overwritten
  using new fields in PlanState.

  This commit does not introduce the actually abstracted implementation of
  different kinds of TupleTableSlots; that will be left for a followup commit.
  The different types of slots introduced will, for now, still use the same
  backing implementation.

  While this already partially invalidates the big comment in tuptable.h, it
  seems to make more sense to update it later, when the different
  TupleTableSlot implementations actually exist.

  Author: Ashutosh Bapat and Andres Freund, with changes by Amit Khandekar
  Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
* Rejigger materializing and fetching a HeapTuple from a slot. (Andres Freund, 2018-11-15)
  Previously materializing a slot always returned a HeapTuple.  As current work
  aims to reduce the reliance on HeapTuples (so other storage systems can work
  efficiently), that needs to change.  Thus split the tasks of materializing a
  slot (i.e. making it independent from the underlying storage / other memory
  contexts) from fetching a HeapTuple from the slot.

  For brevity, allow fetching a HeapTuple from a slot and materializing the
  slot at the same time, controlled by a parameter.

  For now some callers of ExecFetchSlotHeapTuple, with materialize = true,
  expect that changes to the heap tuple will be reflected in the underlying
  slot.  Those places will be adapted in due course, so while not pretty,
  that's OK for now.

  Also rename ExecFetchSlotTuple to ExecFetchSlotHeapTuple and
  ExecFetchSlotTupleDatum to ExecFetchSlotHeapTupleDatum, as it's likely that
  future storage methods will need similar methods.  There already is
  ExecFetchSlotMinimalTuple, so the new names make the naming scheme more
  coherent.

  Author: Ashutosh Bapat and Andres Freund, with changes by Amit Khandekar
  Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
* Update executor documentation for run-time partition pruning (Peter Eisentraut, 2018-11-15)
  With run-time partition pruning, there is no longer necessarily an executor
  node for each corresponding plan node.

  Author: David Rowley <david.rowley@2ndquadrant.com>
* Rationalize expression context reset in ExecModifyTable(). (Andres Freund, 2018-11-15)
  The current pattern of resetting expressions both in ExecProcessReturning()
  and ExecOnConflictUpdate() makes it harder than necessary to reason about
  memory lifetimes.  It also requires materializing slots unnecessarily,
  although this patch doesn't take advantage of the fact that that's not
  necessary anymore.

  Instead reset the expression context once for each input tuple.

  Author: Ashutosh Bapat
  Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
* Improve performance of partition pruning remapping a little. (Tom Lane, 2018-11-15)
  ExecFindInitialMatchingSubPlans has to update the PartitionPruneState's
  subplan mapping data to account for the removal of subplans it prunes.
  However, that's only necessary if run-time pruning will also occur, so we can
  skip it when that won't happen, which should result in not needing to do the
  remapping in many cases.  (We now need it only when some partitions are
  potentially startup-time prunable and others are potentially run-time
  prunable, which seems like an unusual case.)

  Also make some marginal performance improvements in the remapping itself.
  These will mainly win if most partitions got pruned by the startup-time
  pruning, which is perhaps a debatable assumption in this context.

  Also fix some bogus comments, and rearrange code to marginally reduce space
  consumption in the executor's query-lifespan context.

  David Rowley, reviewed by Yoshikazu Imai
  Discussion: https://postgr.es/m/CAKJS1f9+m6-di-zyy4B4AGn0y1B9F8UKDRigtBbNviXOkuyOpw@mail.gmail.com
* geo_ops.c: Clarify comments and function arguments (Alvaro Herrera, 2018-11-15)
  These functions were not crystal clear about what their respective APIs are.
  Make an effort to improve that.

  Emre's patch was correct AFAICT, but I (Álvaro) felt the need to improve a few
  comments a bit more.  Any resulting errors are my own.

  Per complaint from Coverity, Ning Yu, and Tom Lane.

  Author: Emre Hasegeli, Álvaro Herrera
  Reviewed-by: Tomas Vondra, Álvaro Herrera
  Discussion: https://postgr.es/m/26769.1533090136@sss.pgh.pa.us
* Further adjustment to random() seed initialization. (Thomas Munro, 2018-11-15)
  Per complaint from Tom Lane, don't chomp the timestamp at 32 bits, so we can
  shift in some of its higher bits.

  Discussion: https://postgr.es/m/14712.1542253115%40sss.pgh.pa.us