path: root/src
* Add missing break (Alvaro Herrera, 2017-03-26)
    Noticed by Coverity.
* Remove unreachable code in expression evaluation. (Andres Freund, 2017-03-25)
    The previous code still contained expression evaluation time support for CaseExprs without a defresult. But transformCaseExpr() creates a default expression if necessary.

    Author: Andres Freund
    Discussion: https://postgr.es/m/4834.1490480275@sss.pgh.pa.us
* git rm execQual.c (Tom Lane, 2017-03-25)
    Should have been in commit b8d7f053c5c2bf2a7e8734fe3327f6a8bc711755, but passing the patch back and forth seems to have dropped that metadata.
* Faster expression evaluation and targetlist projection. (Andres Freund, 2017-03-25)
    This replaces the old, recursive tree-walk based evaluation, with non-recursive, opcode dispatch based, expression evaluation. Projection is now implemented as part of expression evaluation. This both leads to significant performance improvements, and makes future just-in-time compilation of expressions easier.

    The speed gains primarily come from:
    - non-recursive implementation reduces stack usage / overhead
    - simple sub-expressions are implemented with a single jump, without function calls
    - sharing some state between different sub-expressions
    - reduced amount of indirect/hard to predict memory accesses by laying out operation metadata sequentially; including the avoidance of nearly all of the previously used linked lists
    - more code has been moved to expression initialization, avoiding constant re-checks at evaluation time

    Future just-in-time compilation (JIT) has become easier, as demonstrated by released patches intended to be merged in a later release, for primarily two reasons: Firstly, due to a stricter split between expression initialization and evaluation, less code has to be handled by the JIT. Secondly, due to the non-recursive nature of the generated "instructions", less performance-critical code-paths can easily be shared between interpreted and compiled evaluation.

    The new framework allows for significant future optimizations. E.g.:
    - basic infrastructure to later reduce the per-executor-startup overhead of expression evaluation, by caching state in prepared statements. That'd be helpful in OLTPish scenarios where initialization overhead is measurable.
    - optimizing the generated "code". A number of proposals for potential work have already been made.
    - optimizing the interpreter. Similarly a number of proposals have been made here too.

    The move of logic into the expression initialization step leads to some backward-incompatible changes:
    - Function permission checks are now done during expression initialization, whereas previously they were done during execution. In edge cases this can lead to errors being raised that previously wouldn't have been, e.g. a NULL array being coerced to a different array type previously didn't perform checks.
    - The set of domain constraints to be checked is now evaluated once during expression initialization; previously it was re-built every time a domain check was evaluated. For normal queries this doesn't change much, but e.g. for plpgsql functions, which cache ExprStates, the old set could stick around longer. This behavior might still change.

    Author: Andres Freund, with significant changes by Tom Lane, changes by Heikki Linnakangas
    Reviewed-By: Tom Lane, Heikki Linnakangas
    Discussion: https://postgr.es/m/20161206034955.bh33paeralxbtluv@alap3.anarazel.de
* Re-adhere to policy of no more than 20 tests per parallel group. (Tom Lane, 2017-03-25)
    As explained at the head of parallel_schedule, we place an arbitrary limit of 20 test cases per parallel group. Commit c7a9fa399 overlooked this. Least messy solution seems to be to move the "comments" test to the next group, since it doesn't really belong in a group of datatype tests anyway.
* Remember to drop roles created by regression tests. (Tom Lane, 2017-03-25)
    Commit e3920ac82 created "regress_subscription_user2" in subscription.sql, but forgot to drop it, causing the regression tests to fail if run twice without re-initdb'ing.
* Add cleanup to new test cases (Peter Eisentraut, 2017-03-25)
* Report catalog_xmin separately in hot_standby_feedback (Simon Riggs, 2017-03-25)
    If the upstream walsender is using a physical replication slot, store the catalog_xmin in the slot's catalog_xmin field. If the upstream doesn't use a slot and has only a PGPROC entry, behaviour doesn't change, as we store the combined xmin and catalog_xmin in the PGPROC entry.

    Author: Craig Ringer
* Add missing break (Peter Eisentraut, 2017-03-25)
    Reported-by: Mark Kirkwood <mark.kirkwood@catalyst.net.nz>
* psql: Add missing schema qualification (Peter Eisentraut, 2017-03-25)
* Fix locale pointer use in WIN32 code path (Peter Eisentraut, 2017-03-25)
    Author: David Rowley <david.rowley@2ndquadrant.com>
* Remove ICU tests from default run (Peter Eisentraut, 2017-03-25)
    These tests require the test database to be in UTF8 encoding. Until there is a better solution, take them out of the default test set and treat them like the existing collate.linux.utf8 test, meaning they have to be selected manually.
* Fix recovery test hang (Peter Eisentraut, 2017-03-25)
    The test would hang if the user's ~/.psqlrc contained certain settings. Fix by using psql -X.
* Add COMMENT and SECURITY LABEL support for publications and subscriptions (Peter Eisentraut, 2017-03-24)
* Make header self-contained (Peter Eisentraut, 2017-03-24)
    Add necessary include files for things used in the header.
* Add more subscription DDL tests (Peter Eisentraut, 2017-03-24)
    Add more tests for various variants of subscription DDL commands, based on the code coverage report. Fix a small bug discovered by that.
* Fix typo in comment (Alvaro Herrera, 2017-03-24)
* Fix stats_ext test on 32-bit machines (Alvaro Herrera, 2017-03-24)
    Because tuple packing is different (because of the MAXALIGN difference), the expected cost of a seqscan is different. The commonly used trick of eliding costs in EXPLAIN output (COSTS OFF) would make the tests completely pointless. Instead, add an alternative expected file.
* Check that published table exists on subscriber (Peter Eisentraut, 2017-03-24)
    Author: Petr Jelinek <pjmodos@pjmodos.net>
* Improve access to parallel query from procedural languages. (Robert Haas, 2017-03-24)
    In SQL, the ability to use parallel query was previously contingent on fcache->readonly_func, which is only set for non-volatile functions; but the volatility of a function has no bearing on whether queries inside it can use parallelism. Remove that condition.

    SPI_execute and SPI_execute_with_args always run the plan just once, though not necessarily to completion. Given the changes in commit 691b8d59281b5177f16fe80858df921f77a8e955, it's sensible to pass CURSOR_OPT_PARALLEL_OK here, so do that. This improves access to parallelism for any caller that uses these functions to execute queries. Such callers include plperl, plpython, pltcl, and plpgsql, though it's not the case that they all use these functions exclusively.

    In plpgsql, allow parallel query for plain SELECT queries (as opposed to PERFORM, which already worked) and for plain expressions (which probably won't go through the executor at all, because they will likely be simple expressions, but if they do then this helps).

    Rafia Sabih and Robert Haas, reviewed by Dilip Kumar and Amit Kapila
    Discussion: http://postgr.es/m/CAOGQiiMfJ+4SQwgG=6CVHWoisiU0+7jtXSuiyXBM3y=A=eJzmg@mail.gmail.com
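    As an illustrative sketch, a plain SELECT inside a PL/pgSQL function can now be considered for a parallel plan regardless of the function's volatility; the function, table, and column names below are invented.

        -- Hypothetical example: the SELECT ... INTO below may now use a parallel plan.
        CREATE FUNCTION count_matches(pattern text) RETURNS bigint
        LANGUAGE plpgsql AS $$
        DECLARE
            n bigint;
        BEGIN
            SELECT count(*) INTO n FROM big_table WHERE note LIKE pattern;
            RETURN n;
        END;
        $$;

        SELECT count_matches('%foo%');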
* Fix use-after-free bug (Alvaro Herrera, 2017-03-24)
    Detected by buildfarm member prion
* Reverting 42b4b0b2413b9b472aaf2112a3bbfd80a6ab4dc5 (Simon Riggs, 2017-03-24)
    Buildfarm issues and other reported issues
* Make VACUUM VERBOSE report the number of skipped frozen pages. (Fujii Masao, 2017-03-25)
    Previously manual VACUUM did not report the number of skipped frozen pages even when the VERBOSE option was specified. But this information is helpful to monitor the VACUUM activity, and autovacuum already reports that number in the log file when the condition of log_autovacuum_min_duration is met. This commit changes VACUUM VERBOSE so that it reports the number of frozen pages that it skips.

    Author: Masahiko Sawada
    Reviewed-by: Yugo Nagata and Jim Nasby
    Discussion: http://postgr.es/m/CAD21AoDZQKCxo0L39Mrq08cONNkXQKXuh=2DP1Q8ebmt35SoaA@mail.gmail.com
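    For example (the table name is illustrative; the exact wording of the VERBOSE output is not reproduced here):

        -- The VERBOSE output now also reports how many frozen pages were skipped.
        VACUUM (VERBOSE) mytable;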
* Implement multivariate n-distinct coefficients (Alvaro Herrera, 2017-03-24)
    Add support for explicitly declared statistic objects (CREATE STATISTICS), allowing collection of statistics on more complex combinations than individual table columns. Companion commands DROP STATISTICS and ALTER STATISTICS ... OWNER TO / SET SCHEMA / RENAME are added too. All this DDL has been designed so that more statistic types can be added later on, such as multivariate most-common-values and multivariate histograms between columns of a single table, leaving room for permitting columns on multiple tables, too, as well as expressions.

    This commit only adds support for collection of n-distinct coefficients on user-specified sets of columns in a single table. This is useful to estimate the number of distinct groups in GROUP BY and DISTINCT clauses; estimation errors there can cause over-allocation of memory in hashed aggregates, for instance, so it's a worthwhile problem to solve. A new special pseudo-type pg_ndistinct is used.

    (num-distinct estimation was deemed sufficiently useful by itself that this is worthwhile even if no further statistic types are added immediately; so much so that another version of essentially the same functionality was submitted by Kyotaro Horiguchi: https://postgr.es/m/20150828.173334.114731693.horiguchi.kyotaro@lab.ntt.co.jp though this commit does not use that code.)

    Author: Tomas Vondra. Some code rework by Álvaro.
    Reviewed-by: Dean Rasheed, David Rowley, Kyotaro Horiguchi, Jeff Janes, Ideriha Takeshi
    Discussion: https://postgr.es/m/543AFA15.4080608@fuzzy.cz
        https://postgr.es/m/20170320190220.ixlaueanxegqd5gr@alvherre.pgsql
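    A rough usage sketch (the exact CREATE STATISTICS spelling as of this commit may differ; table and column names are invented):

        -- Collect n-distinct statistics on a column combination so the planner
        -- can better estimate the number of GROUP BY groups. Sketch only.
        CREATE STATISTICS orders_nd (ndistinct) ON customer_id, order_date FROM orders;

        SELECT customer_id, order_date, count(*)
        FROM orders
        GROUP BY customer_id, order_date;

        DROP STATISTICS orders_nd;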
* plpgsql: Don't generate parallel plans for RETURN QUERY. (Robert Haas, 2017-03-24)
    Commit 7aea8e4f2daa4b39ca9d1309a0c4aadb0f7ed81b allowed a parallel plan to be generated for a RETURN QUERY or RETURN QUERY EXECUTE statement in a PL/pgSQL block, but that's a bad idea because plpgsql asks the executor for 50 rows at a time. That means that we'll always be running serially a plan that was intended for parallel execution, which is not a good idea. Fix by not requesting a parallel plan from the outset.

    Per discussion, back-patch to 9.6. There is a slight risk that, due to optimizer error, somebody could have a case where the parallel plan executed serially is actually faster than the supposedly-best serial plan, but the consensus seems to be that that's not sufficient justification for leaving 9.6 unpatched.

    Discussion: http://postgr.es/m/CA+TgmoZ_ZuH+auEeeWnmtorPsgc_SmP+XWbDsJ+cWvWBSjNwDQ@mail.gmail.com
    Discussion: http://postgr.es/m/CA+TgmobXEhvHbJtWDuPZM9bVSLiTj-kShxQJ2uM5GPDze9fRYA@mail.gmail.com
* Add a txid_status function. (Robert Haas, 2017-03-24)
    If your connection to the database server is lost while a COMMIT is in progress, it may be difficult to figure out whether the COMMIT was successful or not. This function will tell you, provided that you don't wait too long to ask. It may be useful in other situations, too.

    Craig Ringer, reviewed by Simon Riggs and by me
    Discussion: http://postgr.es/m/CAMsr+YHQiWNEi0daCTboS40T+V5s_+dst3PYv_8v2wNVH+Xx4g@mail.gmail.com
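    For example, to check the outcome of a transaction whose COMMIT result was never received (the literal txid is illustrative):

        -- Returns 'committed', 'aborted', 'in progress', or NULL if the
        -- transaction is too old for its status to still be known.
        SELECT txid_status(1234567);

        -- The txid to ask about can be captured beforehand with:
        SELECT txid_current();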
* Avoid SnapshotResetXmin() during AtEOXact_Snapshot() (Simon Riggs, 2017-03-24)
    For normal commits and aborts we already reset PgXact->xmin. Avoiding touching highly contended shmem improves concurrent performance.

    Simon Riggs
    Discussion: CANP8+jJdXE9b+b9F8CQT-LuxxO0PBCB-SZFfMVAdp+akqo4zfg@mail.gmail.com
* Handle empty result set in libpqrcv_exec (Peter Eisentraut, 2017-03-24)
    Always return tupleslot and tupledesc from libpqrcv_exec. This avoids requiring callers to handle that separately.

    Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
    Reported-by: Michael Banck <michael.banck@credativ.de>
* Allow SCRAM authentication, when pg_hba.conf says 'md5'. (Heikki Linnakangas, 2017-03-24)
    If a user has a SCRAM verifier in pg_authid.rolpassword, there's no reason we cannot attempt to perform SCRAM authentication instead of MD5. The worst that can happen is that the client doesn't support SCRAM, and the authentication will fail. But previously, it would fail for sure, because we would not even try. SCRAM is strictly more secure than MD5, so there's no harm in trying it. This allows for a more graceful transition from MD5 passwords to SCRAM, as user passwords can be changed to SCRAM verifiers incrementally, without changing pg_hba.conf.

    Refactor the code in auth.c to support that better. Notably, we now have to look up the user's pg_authid entry before sending the password challenge, also when performing MD5 authentication.

    Also simplify the concept of a "doomed" authentication. Previously, if a user had a password, but it had expired, we still performed SCRAM authentication (but always returned error at the end) using the salt and iteration count from the expired password. Now we construct a fake salt, like we do when the user doesn't have a password or doesn't exist at all. That simplifies get_role_password(), and we don't need to distinguish the "user has expired password" and "user does not exist" cases in auth.c.

    On second thoughts, also rename uaSASL to uaSCRAM. It refers to the mechanism specified in pg_hba.conf, and while we use SASL for SCRAM authentication at the protocol level, the mechanism should be called SCRAM, not SASL. As a comparison, we have uaLDAP, even though it looks like the plain 'password' authentication at the protocol level.

    Discussion: https://www.postgresql.org/message-id/6425.1489506016@sss.pgh.pa.us
    Reviewed-by: Michael Paquier
* Fix backup canceling (Teodor Sigaev, 2017-03-24)
    An assert-enabled build crashes, but without asserts it works the wrong way: it may fail to reset the forcing of full page writes and may prevent starting an exclusive backup with the same name as the cancelled one. The patch replaces the pair of booleans nonexclusive_backup_running/exclusive_backup_running with a single enum to correctly describe the backup state.

    Backpatch to 9.6, where the bug was introduced.

    Reported-by: David Steele
    Authors: Michael Paquier, David Steele
    Reviewed-by: Anastasia Lubennikova
    https://commitfest.postgresql.org/13/1068/
* Avoid syntax error on platforms that have neither LOCALE_T nor ICU. (Tom Lane, 2017-03-23)
    Buildfarm member anole sees this union as empty, and doesn't like it.
* Add ICU_FLAGS to one more place (Peter Eisentraut, 2017-03-23)
    Reported-by: Thomas Munro <thomas.munro@enterprisedb.com>
* Fix crash in ICU patch (Peter Eisentraut, 2017-03-23)
    This only happened with single-byte encodings.
* Fix enum definition. (Robert Haas, 2017-03-23)
    Commit 249cf070e36721a65be74838c53acf8249faf935 assigned to one of the labels in the middle of the enum the value that should have been assigned to its first member. Rushabh's patch didn't have that defect as submitted, but I managed to mess it up while editing. Repair.
* ICU support (Peter Eisentraut, 2017-03-23)
    Add a column collprovider to pg_collation that determines which library provides the collation data. The existing choices are default and libc, and this adds an icu choice, which uses the ICU4C library.

    The pg_locale_t type is changed to a union that contains the provider-specific locale handles. Users of locale information are changed to look into that struct for the appropriate handle to use.

    Also add a collversion column that records the version of the collation when it is created, and check at run time whether it is still the same. This detects potentially incompatible library upgrades that can corrupt indexes and other structures. This is currently only supported by ICU-provided collations.

    initdb initializes the default collation set as before from the `locale -a` output but also adds all available ICU locales with a "-x-icu" appended. Currently, ICU-provided collations can only be explicitly named collations. The global database locales are still always libc-provided.

    ICU support is enabled by configure --with-icu.

    Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com>
    Reviewed-by: Andreas Karlsson <andreas@proxel.se>
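    As a usage sketch (which "-x-icu" collations exist depends on the ICU locales available on the system):

        -- List a few ICU-provided collations created by initdb.
        SELECT collname FROM pg_collation WHERE collprovider = 'i' LIMIT 5;

        -- Compare strings under an ICU collation, assuming "en-x-icu" exists.
        SELECT 'straße' < 'strasse' COLLATE "en-x-icu";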
* Track the oldest XID that can be safely looked up in CLOG. (Robert Haas, 2017-03-23)
    This provides infrastructure for looking up arbitrary, user-supplied XIDs without a risk of scary-looking failures from within the clog module. Normally, the oldest XID that can be safely looked up in CLOG is the same as the oldest XID that can be reused without causing wraparound, and the latter is already tracked. However, while truncation is in progress, the values are different, so we must keep track of them separately.

    Craig Ringer, reviewed by Simon Riggs and by me.
    Discussion: http://postgr.es/m/CAMsr+YHQiWNEi0daCTboS40T+V5s_+dst3PYv_8v2wNVH+Xx4g@mail.gmail.com
* Remove createlang and droplang (Peter Eisentraut, 2017-03-23)
    They have been deprecated since PostgreSQL 9.1.

    Reviewed-by: Magnus Hagander <magnus@hagander.net>
    Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
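    The equivalent server-side commands remain available; for example, assuming the language's extension is installed on the server:

        -- Instead of the removed createlang/droplang client programs:
        CREATE EXTENSION plperl;
        DROP EXTENSION plperl;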
* Allow for parallel execution whenever ExecutorRun() is done only once. (Robert Haas, 2017-03-23)
    Previously, it was unsafe to execute a plan in parallel if ExecutorRun() might be called with a non-zero row count. However, it's quite easy to fix things up so that we can support that case, provided that it is known that we will never call ExecutorRun() a second time for the same QueryDesc. Add infrastructure to signal this, and cross-checks to make sure that a caller who claims this is true doesn't later renege.

    While that pattern never happens with queries received directly from a client -- there's no way to know whether multiple Execute messages will be sent unless the first one requests all the rows -- it's pretty common for queries originating from procedural languages, which often limit the result to a single tuple or to a user-specified number of tuples.

    This commit doesn't actually enable parallelism in any additional cases, because currently none of the places that would be able to benefit from this infrastructure pass CURSOR_OPT_PARALLEL_OK in the first place, but it makes it much more palatable to pass CURSOR_OPT_PARALLEL_OK in places where we currently don't, because it eliminates some cases where we'd end up having to run the parallel plan serially.

    Patch by me, based on some ideas from Rafia Sabih and corrected by Rafia Sabih based on feedback from Dilip Kumar and myself.
    Discussion: http://postgr.es/m/CA+TgmobXEhvHbJtWDuPZM9bVSLiTj-kShxQJ2uM5GPDze9fRYA@mail.gmail.com
* Reduce page locking in GIN vacuum (Teodor Sigaev, 2017-03-23)
    While cleaning a posting tree, GIN vacuum can lock the whole tree for a long time by holding LockBufferForCleanup() on its root. The patch changes this in two ways: first, the cleanup lock is taken only if there is an empty page (which should be deleted) and, second, it tries to lock only a subtree, not the whole posting tree.

    Author: Andrey Borodin, with minor editorialization by me
    Reviewed-by: Jeff Davis, me
    https://commitfest.postgresql.org/13/896/
* Remove trailing comma from enum definition (Peter Eisentraut, 2017-03-23)
    Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
* Assorted compilation and test fixes (Peter Eisentraut, 2017-03-23)
    Related to 7c4f52409a8c7d85ed169bbbc1f6092274d03920, per build farm.

    Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
* Minor spelling correction in comment (Simon Riggs, 2017-03-23)
    Jon Nelson
* Replication lag tracking for walsenders (Simon Riggs, 2017-03-23)
    Adds write_lag, flush_lag and replay_lag cols to pg_stat_replication.

    Implements a lag tracker module that reports the lag times based upon measurements of the time taken for recent WAL to be written, flushed and replayed and for the sender to hear about it. These times represent the commit lag that was (or would have been) introduced by each synchronous commit level, if the remote server was configured as a synchronous standby. For an asynchronous standby, the replay_lag column approximates the delay before recent transactions became visible to queries. If the standby server has entirely caught up with the sending server and there is no more WAL activity, the most recently measured lag times will continue to be displayed for a short time and then show NULL.

    Physical replication lag tracking is automatic. Logical replication tracking is possible but is the responsibility of the logical decoding plugin. Tracking is a private module operating within each walsender individually, with values reported to shared memory. Module not used outside of walsender.

    Design and code are good enough now to commit - kudos to the author. In many ways a difficult topic, with important and subtle behaviour, so this should be expected to generate discussion and multiple open items: Test now!

    Author: Thomas Munro, following designs by Fujii Masao and Simon Riggs
    Review: Simon Riggs, Ian Barwick and Craig Ringer
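    The new columns can be inspected directly on the sending server, for example:

        -- Per-standby write, flush, and replay lag, reported as intervals;
        -- NULL once there has been no recent WAL activity for a while.
        SELECT application_name, write_lag, flush_lag, replay_lag
        FROM pg_stat_replication;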
* Logical replication support for initial data copy (Peter Eisentraut, 2017-03-23)
    Add functionality for a new subscription to copy the initial data in the tables and then sync with the ongoing apply process.

    For the copying, add a new internal COPY option to have the COPY source data provided by a callback function. The initial data copy works on the subscriber by receiving COPY data from the publisher and then providing it locally into a COPY that writes to the destination table.

    A WAL receiver can now execute full SQL commands. This is used here to obtain information about tables and publications.

    Several new options were added to CREATE and ALTER SUBSCRIPTION to control whether and when initial table syncing happens.

    Change pg_dump option --no-create-subscription-slots to --no-subscription-connect and use the new CREATE SUBSCRIPTION ... NOCONNECT option for that.

    Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
    Tested-by: Erik Rijkers <er@xs4all.nl>
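    A minimal sketch of the workflow (connection string and object names are placeholders; the option spellings for controlling the initial copy were still in flux at this point):

        -- On the publisher:
        CREATE PUBLICATION mypub FOR TABLE customers, orders;

        -- On the subscriber: creating the subscription now also copies the
        -- existing table contents before applying ongoing changes.
        CREATE SUBSCRIPTION mysub
            CONNECTION 'host=publisher.example.com dbname=appdb'
            PUBLICATION mypub;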
* Fix grammar in comment (Magnus Hagander, 2017-03-23)
    Author: Emil Iggland
* Expose waitforarchive option through pg_stop_backup() (Stephen Frost, 2017-03-22)
    Internally, we have supported the option to either wait for all of the WAL associated with a backup to be archived, or to return immediately. This option is useful to users of pg_stop_backup() as well, when they are reading the stop backup record position and checking that the WAL they need has been archived independently.

    This patch adds an additional, optional, argument to pg_stop_backup() which allows the user to indicate if they wish to wait for the WAL to be archived or not. The default matches current behavior, which is to wait.

    Author: David Steele, with some minor changes, doc updates by me.
    Reviewed by: Takayuki Tsunakawa, Fujii Masao
    Discussion: https://postgr.es/m/758e3fd1-45b4-5e28-75cd-e9e7f93a4c02@pgmasters.net
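    For example, for a non-exclusive backup whose WAL archiving is verified separately (the argument order shown is an assumption based on the function's signature):

        -- First argument: exclusive backup?  Second argument: wait for archiving?
        -- Passing false for the second argument returns immediately.
        SELECT * FROM pg_stop_backup(false, false);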
* Fix wrong costing of Sort under Gather Merge. (Robert Haas, 2017-03-22)
    There's no mechanism for such a sort to become a top-N sort, so we should pass -1 rather than limit_tuples to cost_sort().

    Rushabh Lathia, per a report from Mithun Cy
    Discussion: http://postgr.es/m/CAGPqQf1akRcSgC9=6iwx=sEPap9UvPpHJLzg8_N+OuHdb6fL+g@mail.gmail.com
* Support multiple RADIUS servers (Magnus Hagander, 2017-03-22)
    This changes all the RADIUS-related parameters (radiusserver, radiussecret, radiusport, radiusidentifier) to be plural and to accept a comma-separated list of servers, which will be tried in order.

    Reviewed by Adam Brightwell
* Correct erroneous comment in GetOldestXmin() (Simon Riggs, 2017-03-22)
    Craig Ringer
* Refactor GetOldestXmin() to use flags (Simon Riggs, 2017-03-22)
    Replace ignoreVacuum parameter with more flexible flags.

    Author: Eiji Seki
    Review: Haribabu Kommi