path: root/src/backend
Commit message / Author / Date
...
* Remove a couple other vestigial yylex() declarations.  (Tom Lane, 2015-03-29)
  These were workarounds for a long-gone flex bug; all supported versions of flex emit an extern declaration as expected.
* Add index-only scan support to inet GiST opclass.  (Heikki Linnakangas, 2015-03-28)
  Andreas Karlsson
* Fix GiST index-only scans for opclasses with different storage type.  (Heikki Linnakangas, 2015-03-26)
  We cannot use the index's tuple descriptor directly to describe the index tuples returned in an index-only scan. That's because the index might use a different datatype for the values stored on disk than the type originally indexed. As long as they were both pass-by-ref, it worked, but will not work for pass-by-value types of different sizes.

  I noticed this as a crash when I started hacking a patch to add fetch methods to btree_gist.
* Tweak __attribute__-wrapping macros for better pgindent results.  (Tom Lane, 2015-03-26)
  This improves on commit bbfd7edae5aa5ad5553d3c7e102f2e450d4380d4 by making two simple changes:

  * pg_attribute_noreturn now takes parentheses, ie pg_attribute_noreturn(). Likewise pg_attribute_unused(), pg_attribute_packed(). This reduces pgindent's tendency to misformat declarations involving them.

  * attributes are now always attached to function declarations, not definitions. Previously some places were taking creative shortcuts, which were not merely candidates for bad misformatting by pgindent but often were outright wrong anyway. (It does little good to put a noreturn annotation where callers can't see it.) In any case, if we would like to believe that these macros can be used with non-gcc compilers, we should avoid gratuitous variance in usage patterns.

  I also went through and manually improved the formatting of a lot of declarations, and got rid of excessively repetitive (and now obsolete anyway) comments informing the reader what pg_attribute_printf is for.
* Add support for index-only scans in GiST.  (Heikki Linnakangas, 2015-03-26)
  This adds a new GiST opclass method, 'fetch', which is used to reconstruct the original Datum from the value stored in the index. Also, the 'canreturn' index AM interface function gains a new 'attno' argument. That makes it possible to use index-only scans on a multi-column index where some of the opclasses support index-only scans but some do not.

  This patch adds support in the box and point opclasses. Other opclasses can be added later as follow-on patches (btree_gist would be particularly interesting).

  Anastasia Lubennikova, with additional fixes and modifications by me.
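  A minimal sketch of the user-visible effect, assuming a hypothetical table "pts" with a point column and enough data for the planner to prefer the index:

      CREATE TABLE pts (p point);
      CREATE INDEX pts_p_gist ON pts USING gist (p);
      -- ... load data, then:
      VACUUM ANALYZE pts;
      -- With the point opclass's new 'fetch' support, this query can be
      -- answered with an Index Only Scan instead of visiting the heap:
      EXPLAIN (COSTS OFF) SELECT p FROM pts WHERE p <@ box '((0,0),(100,100))';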
* Minor cleanup of GiST code, for readability.  (Heikki Linnakangas, 2015-03-26)
  Remove the gistcentryinit function, inlining the relevant part of it into the only caller.
* Suppress some unused-variable complaints in new LOCK_DEBUG code.  (Tom Lane, 2015-03-26)
  Jeff Janes
* Make SyncRepWakeQueue a static function.  (Tatsuo Ishii, 2015-03-26)
  It is only used in src/backend/replication/syncrep.c.

  Back-patch to all supported branches except 9.1, which already declares the function as static.
* Add an ASSERT statement in plpgsql.  (Tom Lane, 2015-03-25)
  This is meant to make it easier to insert simple debugging cross-checks in plpgsql functions.

  Pavel Stehule, reviewed by Jim Nasby
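  A minimal sketch of the new statement; the condition and message text are only examples:

      DO $$
      DECLARE
          n int;
      BEGIN
          SELECT count(*) INTO n FROM pg_class;
          -- ASSERT raises an exception if the condition is not true
          ASSERT n > 0, 'expected at least one row in pg_class';
      END;
      $$ LANGUAGE plpgsql;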
* Centralize definition of integer limits.  (Andres Freund, 2015-03-25)
  Several submitted and even committed patches have run into the problem that C89, our baseline, does not provide minimum/maximum values for various integer datatypes. C99's stdint.h does, but we can't rely on it.

  Several parts of the code defined limits locally, so instead centralize the definitions to c.h.

  This patch also changes the more obvious usages of literal limit values; there's more places that could be changed, but it's less clear whether it's beneficial to change those.

  Author: Andrew Gierth
  Discussion: 87619tc5wc.fsf@news-spur.riddles.org.uk
* Return ObjectAddress in many ALTER TABLE sub-routines  (Alvaro Herrera, 2015-03-25)
  Since commit a2e35b53c39b2a, most CREATE and ALTER commands return the ObjectAddress of the affected object. This is useful for event triggers to try to figure out exactly what happened. This patch extends this idea a bit further to cover ALTER TABLE as well: an auxiliary ObjectAddress is returned for each of several subcommands of ALTER TABLE. This makes it possible to decode with precision what happened during execution of any ALTER TABLE command; for instance, which constraint was added by ALTER TABLE ADD CONSTRAINT, or which parent got dropped from the parents list by ALTER TABLE NO INHERIT.

  As with the previous patch, there is no immediate user-visible change here. This is all really just continuing what c504513f83a9ee8 started.

  Reviewed by Stephen Frost.
* Reduce pinning and buffer content locking for btree scans.  (Kevin Grittner, 2015-03-25)
  Even though the main benefit of the Lehman and Yao algorithm for btrees is that no locks need be held between page reads in an index search, we were holding a buffer pin on each leaf page after it was read until we were ready to read the next one. The reason was so that we could treat this as a weak lock to create an "interlock" with vacuum's deletion of heap line pointers, even though our README file pointed out that this was not necessary for a scan using an MVCC snapshot.

  The main goal of this patch is to reduce the blocking of vacuum processes by in-progress btree index scans (including a cursor which is idle), but the code rearrangement also allows for one less buffer content lock to be taken when a forward scan steps from one page to the next, which results in a small but consistent performance improvement in many workloads.

  This patch leaves behavior unchanged for some cases, which can be addressed separately so that each case can be evaluated on its own merits. These unchanged cases are when a scan uses a non-MVCC snapshot, an index-only scan, and a scan of a btree index for which modifications are not WAL-logged. If later patches allow all of these cases to drop the buffer pin after reading a leaf page, then the btree vacuum process can be simplified; it will no longer need the "super-exclusive" lock to delete tuples from a page.

  Reviewed by Heikki Linnakangas and Kyotaro Horiguchi
* Add OID output argument to DefineTSConfiguration  (Alvaro Herrera, 2015-03-25)
  ... which is set to the OID of a copied text search config, whenever the COPY clause is used.

  This is in the spirit of commit a2e35b53c39.
* Fix bug for array-formatted identities of user mappings  (Alvaro Herrera, 2015-03-25)
  I failed to realize that server names reported in the object args array would get quoted, which is wrong; remove that, making sure that it's only quoted in the string-formatted identity.

  This bug was introduced by my commit cf34e373, which was backpatched, but since object name/args arrays are new in commit a676201490c8, there is no need to backpatch this any further.
* Fix gram.y comment to match reality  (Alvaro Herrera, 2015-03-25)
  There are other comments in there that don't precisely match what's implemented, but this one confused me enough to be worth fixing.
* Add support for ALTER TABLE IF EXISTS ... RENAME CONSTRAINT  (Bruce Momjian, 2015-03-24)
  Also add regression test. Previously this was documented to work, but didn't.
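  A sketch of the now-working form, with hypothetical table and constraint names:

      -- Succeeds with a NOTICE (and does nothing) if "orders" does not exist;
      -- previously this form was rejected despite being documented to work.
      ALTER TABLE IF EXISTS orders
          RENAME CONSTRAINT orders_qty_check TO orders_quantity_check;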
* Fix ExecOpenScanRelation to take a lock on a ROW_MARK_COPY relation.  (Tom Lane, 2015-03-24)
  ExecOpenScanRelation assumed that any relation listed in the ExecRowMark list has been locked by InitPlan; but this is not true if the rel's markType is ROW_MARK_COPY, which is possible if it's a foreign table.

  In most (possibly all) cases, failure to acquire a lock here isn't really problematic because the parser, planner, or plancache would have taken the appropriate lock already. In principle though it might leave us vulnerable to working with a relation that we hold no lock on, and in any case if the executor isn't depending on previously-taken locks otherwise then it should not do so for ROW_MARK_COPY relations.

  Noted by Etsuro Fujita. Back-patch to all active versions, since the inconsistency has been there a long time. (It's almost certainly irrelevant in 9.0, since that predates foreign tables, but the code's still wrong on its own terms.)
* Apply table and domain CHECK constraints in name order.  (Tom Lane, 2015-03-23)
  Previously, CHECK constraints of the same scope were checked in whatever order they happened to be read from pg_constraint. (Usually, but not reliably, this would be creation order for domain constraints and reverse creation order for table constraints, because of differing implementation details.)

  Nondeterministic results of this sort are problematic at least for testing purposes, and in discussion it was agreed to be a violation of the principle of least astonishment. Therefore, borrow the principle already established for triggers, and apply such checks in name order (using strcmp() sort rules). This lets users control the check order if they have a mind to.

  Domain CHECK constraints still follow the rule of checking lower nested domains' constraints first; the name sort only applies to multiple constraints attached to the same domain.

  In passing, I failed to resist the temptation to wordsmith a bit in create_domain.sgml.

  Apply to HEAD only, since this could result in a behavioral change in existing applications, and the potential regression test failures have not actually been observed in our buildfarm.
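  A small illustration of the new deterministic ordering, with hypothetical names: the inserted row violates both constraints, and the one whose name sorts first per strcmp() is the one reported.

      CREATE TABLE t (
          n int,
          CONSTRAINT b_even     CHECK (n % 2 = 0),
          CONSTRAINT a_positive CHECK (n > 0)
      );
      -- Violates both constraints; the error now reliably names "a_positive".
      INSERT INTO t VALUES (-1);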
* Don't delay replication for less than recovery_min_apply_delay's resolution.  (Andres Freund, 2015-03-23)
  Recovery delays are implemented by waiting on a latch, and latches take milliseconds as a parameter. The required amount of waiting was computed using microsecond resolution though and the wait loop's abort condition was checking the delay in microseconds as well. This could lead to short spurts of busy looping when the overall wait time was below a millisecond, but above 0 microseconds.

  Instead just formulate the wait loop's abort condition in millisecond granularity as well. Given that that's recovery_min_apply_delay resolution, it seems harmless to not wait for less than a millisecond.

  Backpatch to 9.4 where recovery_min_apply_delay was introduced.

  Discussion: 20150323141819.GH26995@alap3.anarazel.de
* Fix copy & paste error in 4f1b890b137.  (Andres Freund, 2015-03-23)
  Due to the bug delayed standbys would not delay when applying prepared transactions.

  Discussion: CAB7nPqT6BO1cCn+sAyDByBxA4EKZNAiPi2mFJ=ANeZmnmewRyg@mail.gmail.com

  Michael Paquier via Coverity.
* Remove ill-advised pre-check for DSM segment exhaustion.  (Robert Haas, 2015-03-23)
  dsm_control->nitems never decreases, so this is testing whether the server has *ever* run out of DSM segments, not whether it is *currently* out of DSM segments.

  Reported off-list by Amit Kapila.
* to_char: revert cc0d90b73b2e6dd2f301d46818a7265742c41a14  (Bruce Momjian, 2015-03-22)
  Revert "to_char(float4/8): zero pad to specified length". There are too many platform-specific problems, and the proper rounding is missing. Also revert companion patch 9d61b9953c1489cbb458ca70013cf5fca1bb7710.
* Fix minor copy & pasto in the int128 accumulator patch.  (Andres Freund, 2015-03-22)
  It's unlikely that using PG_GETARG_INT16 instead of PG_GETARG_INT32 in this place can cause actual problems, but this still should be fixed.
* Allow foreign tables to participate in inheritance.  (Tom Lane, 2015-03-22)
  Foreign tables can now be inheritance children, or parents. Much of the system was already ready for this, but we had to fix a few things of course, mostly in the area of planner and executor handling of row locks.

  As side effects of this, allow foreign tables to have NOT VALID CHECK constraints (and hence to accept ALTER ... VALIDATE CONSTRAINT), and to accept ALTER SET STORAGE and ALTER SET WITH/WITHOUT OIDS. Continuing to disallow these things would've required bizarre and inconsistent special cases in inheritance behavior. Since foreign tables don't enforce CHECK constraints anyway, a NOT VALID one is a complete no-op, but that doesn't mean we shouldn't allow it. And it's possible that some FDWs might have use for SET STORAGE or SET WITH OIDS, though doubtless they will be no-ops for most.

  An additional change in support of this is that when a ModifyTable node has multiple target tables, they will all now be explicitly identified in EXPLAIN output, for example:

      Update on pt1  (cost=0.00..321.05 rows=3541 width=46)
        Update on pt1
        Foreign Update on ft1
        Foreign Update on ft2
        Update on child3
        ->  Seq Scan on pt1  (cost=0.00..0.00 rows=1 width=46)
        ->  Foreign Scan on ft1  (cost=100.00..148.03 rows=1170 width=46)
        ->  Foreign Scan on ft2  (cost=100.00..148.03 rows=1170 width=46)
        ->  Seq Scan on child3  (cost=0.00..25.00 rows=1200 width=46)

  This was done mainly to provide an unambiguous place to attach "Remote SQL" fields, but it is useful for inherited updates even when no foreign tables are involved.

  Shigeru Hanada and Etsuro Fujita, reviewed by Ashutosh Bapat and Kyotaro Horiguchi, some additional hacking by me
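  A rough sketch of the new capability; the server, option, and table names below are hypothetical, and the child's column list must match the parent's:

      CREATE TABLE pt1 (id int, payload text);
      -- A foreign table may now be declared as an inheritance child ...
      CREATE FOREIGN TABLE ft1 (id int, payload text)
          INHERITS (pt1)
          SERVER remote_srv OPTIONS (table_name 'pt1_remote');
      -- ... and may carry a NOT VALID CHECK constraint (a no-op, but allowed):
      ALTER FOREIGN TABLE ft1 ADD CONSTRAINT id_positive CHECK (id > 0) NOT VALID;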
* Add TOAST table to pg_shseclabel for long label use  (Bruce Momjian, 2015-03-21)
  Report by Andres Freund
* Use mmap MAP_NOSYNC option to limit shared memory writes  (Bruce Momjian, 2015-03-21)
  mmap() is rarely used for shared memory, but when it is, this option is useful, particularly on the BSDs.

  Patch by Sean Chittenden
* to_char(float4/8): zero pad to specified length  (Bruce Momjian, 2015-03-21)
  Previously, zero padding was limited to the internal length, rather than the specified length. This allows it to match to_char(int/numeric), which always padded to the specified length.

  Regression tests added.

  BACKWARD INCOMPATIBILITY
* C comment: update lock level mention in comment  (Bruce Momjian, 2015-03-20)
  Patch by Etsuro Fujita
* Use 128-bit math to accelerate some aggregation functions.  (Andres Freund, 2015-03-20)
  On platforms where we support 128-bit integers, use them to implement faster transition functions for sum(int8), avg(int8), var_*(int2/int4), stddev_*(int2/int4). Where not supported, continue to use numeric as a transition type.

  In some synthetic benchmarks this has been shown to provide significant speedups.

  Bumps catversion.

  Discussion: 544BB5F1.50709@proxel.se
  Author: Andreas Karlsson
  Reviewed-By: Peter Geoghegan, Petr Jelinek, Andres Freund, Oskari Saarenmaa, David Rowley
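  The results of these aggregates are unchanged; only the internal transition state differs on platforms with 128-bit integer support. The affected aggregates can be exercised with something like:

      SELECT sum(g::int8), avg(g::int8),
             var_samp(g::int4), stddev_samp(g::int2)
      FROM generate_series(1, 10000) AS g;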
* Fix whitespace  (Peter Eisentraut, 2015-03-19)
* GetUserId() changes to has_privs_of_role()  (Stephen Frost, 2015-03-19)
  The pg_stat and pg_signal-related functions have been using GetUserId() instead of has_privs_of_role() for checking if the current user should be able to see details in pg_stat_activity or signal other processes, requiring a user to do 'SET ROLE' for inherited roles for a permissions check, unlike other permissions checks. This patch changes that behavior to, instead, act like most other permission checks and use has_privs_of_role(), removing the 'SET ROLE' need.

  Documentation and error messages updated accordingly.

  Per discussion with Alvaro, Peter, Adam (though not using Adam's patch), and Robert.

  Reviewed by Jeevan Chalke.
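  A sketch of the practical effect, using hypothetical role names; "alice" is assumed to be a member of "app_owner" with INHERIT:

      -- Connected as role alice.
      -- Previously this required an explicit "SET ROLE app_owner" first;
      -- now role membership alone is enough to see details and to signal:
      SELECT pid, query FROM pg_stat_activity WHERE usename = 'app_owner';
      SELECT pg_terminate_backend(pid)
        FROM pg_stat_activity
       WHERE usename = 'app_owner' AND pid <> pg_backend_pid();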
* Add flags argument to dsm_create.  (Robert Haas, 2015-03-19)
  Right now, there's only one flag, DSM_CREATE_NULL_IF_MAXSEGMENTS, which suppresses the error that would normally be thrown when the maximum number of segments already exists, instead returning NULL. It might be useful to add more flags in the future, such as one to ignore allocation errors, but I haven't done that here.
* Fix status reporting for terminated bgworkers that were never started.  (Robert Haas, 2015-03-19)
  Previously, GetBackgroundWorkerPid() would return BGWH_NOT_YET_STARTED if the slot used for the worker registration had not been reused by unrelated activity, and BGWH_STOPPED if it had. Either way, a process that had requested notification when the state of one of its background workers changed did not receive such notifications.

  Fix things so that GetBackgroundWorkerPid() always returns BGWH_STOPPED in this situation, so that we do not erroneously give waiters the impression that the worker will eventually be started; and send notifications just as we would if the process terminated after having been started, so that it's possible to wait for the postmaster to process a worker termination request without polling.

  Discovered by Amit Kapila during testing of parallel sequential scan. Analysis and fix by me. Back-patch to 9.4; there may not be anyone relying on this interface yet, but if anyone is, the new behavior is a clear improvement.
* array_offset() and array_offsets()  (Alvaro Herrera, 2015-03-18)
  These functions return the offset position or positions of a value in an array.

  Author: Pavel Stěhule
  Reviewed by: Jim Nasby
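  A minimal illustration of the two functions as described above; the input values and the commented results are only illustrative:

      SELECT array_offset(ARRAY['mon','tue','wed','tue'], 'tue');   -- 2, first match only
      SELECT array_offsets(ARRAY['mon','tue','wed','tue'], 'tue');  -- {2,4}, all matches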
* Setup cursor position for schema-qualified elements  (Alvaro Herrera, 2015-03-18)
  This makes any errors thrown while looking up such schemas report the position of the error.

  Author: Ryan Kelly
  Reviewed by: Jeevan Chalke, Tom Lane
* Rationalize vacuuming options and parameters  (Alvaro Herrera, 2015-03-18)
  We were involving the parser too much in setting up initial vacuuming parameters. This patch moves that responsibility elsewhere to simplify code, and also to make future additions easier. To do this, create a new struct VacuumParams which is filled just prior to vacuum execution, instead of at parse time; for user-invoked vacuuming this is set up in a new function ExecVacuum, while autovacuum sets it up by itself.

  While at it, add a new member VACOPT_SKIPTOAST to enum VacuumOption, only set by autovacuum, which is used to disable vacuuming of the toast table instead of the old do_toast parameter; this relieves the argument list of vacuum() and some callees a bit. This partially makes up for having added more arguments in an effort to keep autovacuum from constructing a VacuumStmt parse node.

  Author: Michael Paquier. Some tweaks by Álvaro
  Reviewed by: Robert Haas, Stephen Frost, Álvaro Herrera
* Fix out-of-array-bounds compiler warning  (Alvaro Herrera, 2015-03-16)
  Since the array length check is using a post-increment operator, the compiler complains that there's a potential write to one element beyond the end of the array. This is not possible currently: the only path to this function is through pg_get_object_address(), which already verifies that the input array is no more than two elements in length. Still, a bug is a bug.

  No idea why my compiler doesn't complain about this ...

  Pointed out by Dean Rasheed and Peter Eisentraut
* Support opfamily members in get_object_address  (Alvaro Herrera, 2015-03-16)
  In the spirit of 890192e99af and 4464303405f: have get_object_address understand individual pg_amop and pg_amproc objects. There is no way to refer to such objects directly in the grammar -- rather, they are almost always considered an integral part of the opfamily that contains them. (The only case that deals with them individually is ALTER OPERATOR FAMILY ADD/DROP, which carries the opfamily address separately and thus does not need it to be part of each added/dropped element's address.) In event triggers it becomes possible to become involved with individual amop/amproc elements, and this commit enables pg_get_object_address to do so as well.

  To make the overall coding simpler, this commit also slightly changes the get_object_address representation for opclasses and opfamilies: instead of having the AM name in the objargs array, I moved it as the first element of the objnames array. This enables the new code to use objargs for the type names used by pg_amop and pg_amproc.

  Reviewed by: Stephen Frost
* Improve representation of PlanRowMark.  (Tom Lane, 2015-03-15)
  This patch fixes two inadequacies of the PlanRowMark representation.

  First, that the original LockingClauseStrength isn't stored (and cannot be inferred for foreign tables, which always get ROW_MARK_COPY). Since some PlanRowMarks are created out of whole cloth and don't actually have an ancestral RowMarkClause, this requires adding a dummy LCS_NONE value to enum LockingClauseStrength, which is fairly annoying but the alternatives seem worse. This fix allows getting rid of the use of get_parse_rowmark() in FDWs (as per the discussion around commits 462bd95705a0c23b and 8ec8760fc87ecde0), and it simplifies some things elsewhere.

  Second, that the representation assumed that all child tables in an inheritance hierarchy would use the same RowMarkType. That's true today but will soon not be true. We add an "allMarkTypes" field that identifies the union of mark types used in all a parent table's children, and use that where appropriate (currently, only in preprocess_targetlist()).

  In passing fix a couple of minor infelicities left over from the SKIP LOCKED patch, notably that _outPlanRowMark still thought waitPolicy is a bool.

  Catversion bump is required because the numeric values of enum LockingClauseStrength can appear in on-disk rules.

  Extracted from a much larger patch to support foreign table inheritance; it seemed worth breaking this out, since it's a separable concern.

  Shigeru Hanada and Etsuro Fujita, somewhat modified by me
* Merge the various forms of transaction commit & abort records.  (Andres Freund, 2015-03-15)
  Since 465883b0a two versions of commit records have existed. A compact version that was used when no cache invalidations, smgr unlinks and similar were needed, and a full version that could deal with all that. Additionally the full version was embedded into twophase commit records.

  That resulted in a measurable reduction in the size of the logged WAL in some workloads. But more recently additions like logical decoding, which e.g. needs information about the database something was executed on, made it applicable in fewer situations. The static split generally made it hard to expand the commit record, because concerns over the size made it hard to add anything to the compact version. Additionally it's not particularly pretty to have twophase.c insert RM_XACT records.

  Rejigger things so that the commit and abort records only have one form each, including the twophase equivalents. The presence of the various optional (in the sense of not being in every record) pieces is indicated by bits in the 'xinfo' flag. That flag previously was not included in compact commit records. To prevent an increase in size due to its presence, it's only included if necessary; signalled by a bit in the xl_info bits available for xact.c, similar to heapam.c's XLOG_HEAP_OPMASK/XLOG_HEAP_INIT_PAGE.

  Twophase commit/aborts are now the same as their normal counterparts. The original transaction's xid is included in an optional data field.

  This means that commit records generally are smaller, except in the case of a transaction with subtransactions, but no other special cases; the increase there is four bytes, which seems acceptable given that the more common case of not having subtransactions shrank. The savings are especially measurable for twophase commits, which previously always used the full version; but will in practice only infrequently have required that.

  The motivation for this work is not the space savings and deduplication though; it's that it makes it easier to extend commit records with additional information. That's just a few lines of code now, without impacting the common case where that information is not needed.

  Discussion: 20150220152150.GD4149@awork2.anarazel.de, 235610.92468.qm%40web29004.mail.ird.yahoo.com

  Reviewed-By: Heikki Linnakangas, Simon Riggs
* Increase max_wal_size's default from 128MB to 1GB.  (Andres Freund, 2015-03-15)
  The introduction of min_wal_size & max_wal_size in 88e982302684 makes it feasible to increase the default upper bound in checkpoint size. Previously raising the default would lead to an increased disk footprint, even if more segments weren't beneficial.

  The low default of checkpoint size is one of the common performance problems users have, thus increasing the default makes sense. Setups where the increase in maximum disk usage is a problem will very likely have to run with a modified configuration anyway.

  Discussion: 54F4EFB8.40202@agliodbs.com, CA+TgmoZEAgX5oMGJOHVj8L7XOkAe05Gnf45rP40m-K3FhZRVKg@mail.gmail.com

  Author: Josh Berkus, after a discussion involving lots of people.
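  A quick way to inspect or override the new default; the 4GB figure is only an example for a write-heavy setup:

      SHOW max_wal_size;                        -- now reports 1GB by default
      ALTER SYSTEM SET max_wal_size = '4GB';
      SELECT pg_reload_conf();                  -- picked up without a restart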
* Remove pause_at_recovery_target recovery.conf setting.  (Andres Freund, 2015-03-15)
  The new recovery_target_action (introduced in aedccb1f6/b8e33a85d4) replaces its functionality. Having both seems likely to cause more confusion than it saves worry due to the incompatibility.

  Discussion: 5484FC53.2060903@2ndquadrant.com

  Author: Petr Jelinek
* Suppress maybe-uninitialized compiler warnings.  (Fujii Masao, 2015-03-15)
  Previously some compilers warned that the variables added by 57aa5b2 might be used uninitialized.

  Spotted by Andres Freund
* Fix integer overflow in debug message of walreceiver  (Tatsuo Ishii, 2015-03-14)
  The message tries to report the replication apply delay, which fails if the first WAL record has not been applied yet. The fix is to show "N/A", indicating that the delay data is not yet available, instead of an overflowed negative number.

  Problem reported by me and patch by Fabrízio de Royes Mello. Back patched to 9.4, 9.3 and 9.2 stable branches (9.1 and 9.0 do not have the debug message).
* Ensure tableoid reads correctly in EvalPlanQual-manufactured tuples.  (Tom Lane, 2015-03-12)
  The ROW_MARK_COPY path in EvalPlanQualFetchRowMarks() was just setting tableoid to InvalidOid, I think on the assumption that the referenced RTE must be a subquery or other case without a meaningful OID. However, foreign tables also use this code path, and they do have meaningful table OIDs; so failure to set the tuple field can lead to user-visible misbehavior. Fix that by fetching the appropriate OID from the range table.

  There's still an issue about whether CTID can ever have a meaningful value in this case; at least with postgres_fdw foreign tables, it does. But that is a different problem that seems to require a significantly different patch --- it's debatable whether postgres_fdw really wants to use this code path at all.

  Simplified version of a patch by Etsuro Fujita, who also noted the problem to begin with. The issue can be demonstrated in all versions having FDWs, so back-patch to 9.1.
* Fix memory leaks in GIN index vacuum.  (Heikki Linnakangas, 2015-03-12)
  Per bug #12850 by Walter Nordmann. Backpatch to 9.4 where the leak was introduced.
* Support flattening of empty-FROM subqueries and one-row VALUES tables.  (Tom Lane, 2015-03-11)
  We can't handle this in the general case due to limitations of the planner's data representations; but we can allow it in many useful cases, by being careful to flatten only when we are pulling a single-row subquery up into a FROM (or, equivalently, inner JOIN) node that will still have at least one remaining relation child.

  Per discussion of an example from Kyotaro Horiguchi.
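  An example of the kind of query that can now be flattened; "some_table" and its columns are hypothetical:

      -- The one-row VALUES list is pulled up into the parent query instead of
      -- being planned as a separate subquery scan, because the join still has
      -- one real relation child:
      EXPLAIN
      SELECT t.*
      FROM some_table t
      JOIN (VALUES (42)) AS v(x) ON t.id = v.x;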
* Fix old bug in get_loop_count().  (Tom Lane, 2015-03-11)
  While poking at David Kubečka's issue I noticed an ancient logic error in get_loop_count(): it used 1.0 as a "no data yet" indicator, but since that is actually a valid rowcount estimate, this doesn't work. If we have one input relation with 1.0 as rowcount and then another one with a larger rowcount, we should use 1.0 as the result, but we picked the larger rowcount instead. (I think when I coded this, I recognized the conflict, but mistakenly thought that the logic would pick the desired count anyway.)

  Fixing this changed the plan for one existing regression test case. Since the point of that test is to exercise creation of a particular shape of nestloop plan, I tweaked the query a little bit so it still results in the same plan choice.

  This is definitely a bug, but I'm hesitant to back-patch since it might change plan choices unexpectedly, and anyway failure to implement a heuristic precisely as intended is a pretty low-grade bug.
* Improve planner's cost estimation in the presence of semijoins.  (Tom Lane, 2015-03-11)
  If we have a semijoin, say

      SELECT * FROM x WHERE x1 IN (SELECT y1 FROM y)

  and we're estimating the cost of a parameterized indexscan on x, the number of repetitions of the indexscan should not be taken as the size of y; it'll really only be the number of distinct values of y1, because the only valid plan with y on the outside of a nestloop would require y to be unique-ified before joining it to x. Most of the time this doesn't make that much difference, but sometimes it can lead to drastically underestimating the cost of the indexscan and hence choosing a bad plan, as pointed out by David Kubečka.

  Fixing this is a bit difficult because parameterized indexscans are costed out quite early in the planning process, before we have the information that would be needed to call estimate_num_groups() and thereby estimate the number of distinct values of the join column(s). However we can move the code that extracts a semijoin RHS's unique-ification columns, so that it's done in initsplan.c rather than on-the-fly in create_unique_path(). That shouldn't make any difference speed-wise and it's really a bit cleaner too.

  The other bit of information we need is the size of the semijoin RHS, which is easy if it's a single relation (we make those estimates before considering indexscan costs) but problematic if it's a join relation. The solution adopted here is just to use the product of the sizes of the join component rels. That will generally be an overestimate, but since estimate_num_groups() only uses this input as a clamp, an overestimate shouldn't hurt us too badly. In any case we don't allow this new logic to produce a value larger than we would have chosen before, so that at worst an overestimate leaves us no wiser than we were before.
* Support default ACLs in get_object_address  (Alvaro Herrera, 2015-03-11)
  In the spirit of 890192e99af, this time add support for the things living in the pg_default_acl catalog. These are not really "objects", but they show up as such in event triggers.

  There is no "DROP DEFAULT PRIVILEGES" or similar command, so it doesn't look like the new representation given would be useful anywhere else, so I didn't try to use it outside objectaddress.c. (That might be a bug in itself, but that would be material for another commit.)

  Reviewed by Stephen Frost.