path: root/src/backend/utils/cache/lsyscache.c
...
* Fix getTypeIOParam to support type record[]. (Tom Lane, 2011-12-01)
  Since record[] uses array_in, it needs to have its element type passed as typioparam. In HEAD and 9.1, this fix essentially reverts commit 9bc933b2125a5358722490acbc50889887bf7680, which was a hack that is no longer needed since domains don't set their typelem anymore. Before that, adjust the logic so that only domains are excluded from being treated like arrays, rather than assuming that only base types should be included. Add a regression test to demonstrate the need for this.
  Per report from Maxim Boguk. Back-patch to 8.4, where type record[] was added.
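  A sketch of what the corrected helper plausibly looks like after this change (an assumption reconstructed from the commit text, not a verbatim quote of lsyscache.c):

      #include "postgres.h"
      #include "access/htup.h"
      #include "catalog/pg_type.h"
      #include "utils/lsyscache.h"

      /* Hedged sketch: array types (anything with a valid typelem, now that
       * domains no longer copy their base type's typelem) hand back their
       * element type as typioparam; everything else gets its own OID. */
      Oid
      getTypeIOParam(HeapTuple typeTuple)
      {
          Form_pg_type typeStruct = (Form_pg_type) GETSTRUCT(typeTuple);

          if (OidIsValid(typeStruct->typelem))
              return typeStruct->typelem;
          else
              return HeapTupleGetOid(typeTuple);
      }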
* Further code review for range types patch. (Tom Lane, 2011-11-20)
  Fix some bugs in coercion logic and pg_dump; more comment cleanup; minor cosmetic improvements.
* Support range data types. (Heikki Linnakangas, 2011-11-03)
  Selectivity estimation functions are missing for some range type operators, which is a TODO.
  Jeff Davis
* Remove assumptions that not-equals operators cannot be in any opclass. (Tom Lane, 2011-07-06)
  get_op_btree_interpretation assumed this in order to save some duplication of code, but it's not true in general anymore because we added <> support to btree_gist. (We still assume it for btree opclasses, though.) Also, essentially the same logic was baked into predtest.c. Get rid of that duplication by generalizing get_op_btree_interpretation so that it can be used by predtest.c.
  Per bug report from Denis de Bernardy and investigation by Jeff Davis, though I didn't use Jeff's patch exactly as-is. Back-patch to 9.1; we do not support this usage before that.
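  A hedged usage sketch of the generalized lookup, assuming the post-patch list-returning form of get_op_btree_interpretation and the OpBtreeInterpretation field names shown below; the operator OID is a placeholder:

      #include "postgres.h"
      #include "nodes/pg_list.h"
      #include "utils/lsyscache.h"

      static void
      show_btree_interpretations(Oid opno)
      {
          List     *interps = get_op_btree_interpretation(opno);
          ListCell *lc;

          foreach(lc, interps)
          {
              OpBtreeInterpretation *interp = (OpBtreeInterpretation *) lfirst(lc);

              /* Each entry reports a btree opfamily plus the strategy the
               * operator (or, for <>, its negator) has within that family. */
              elog(DEBUG1, "opfamily %u, strategy %d",
                   interp->opfamily_id, (int) interp->strategy);
          }
      }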
* Move Trigger and TriggerDesc structs out of rel.h into a new reltrigger.h (Alvaro Herrera, 2011-07-04)
  This lets us stop including rel.h into execnodes.h, which is a widely used header.
* Fix failure to check whether a rowtype's component types are sortable. (Tom Lane, 2011-06-03)
  The existence of a btree opclass accepting composite types caused us to assume that every composite type is sortable. This isn't true of course; we need to check if the column types are all sortable. There was logic for this for the case of array comparison (ie, check that the element type is sortable), but we missed the point for rowtypes. Per Teodor's report of an ANALYZE failure for an unsortable composite type.
  Rather than just add some more ad-hoc logic for this, I moved knowledge of the issue into typcache.c. The typcache will now only report out array_eq, record_cmp, and friends as usable operators if the array or composite type will work with those functions. Unfortunately we don't have enough info to do this for anonymous RECORD types; in that case, just assume it will work, and take the runtime failure as before if it doesn't.
  This patch might be a candidate for back-patching at some point, but given the lack of complaints from the field, I'd rather just test it in HEAD for now. Note: most of the places touched in this patch will need further work when we get around to supporting hashing of record types.
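  A hedged sketch of how a caller can delegate the sortability question to the typcache; lookup_type_cache and the TYPECACHE_* flags are the standard typcache API, while the wrapper function itself is illustrative only:

      #include "postgres.h"
      #include "utils/typcache.h"

      static bool
      composite_or_array_is_sortable(Oid typid)
      {
          TypeCacheEntry *typentry = lookup_type_cache(typid,
                                                       TYPECACHE_LT_OPR |
                                                       TYPECACHE_CMP_PROC);

          /* After this patch the typcache leaves these fields unset when the
           * component (or element) types are not themselves sortable. */
          return OidIsValid(typentry->lt_opr) && OidIsValid(typentry->cmp_proc);
      }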
* pgindent run before PG 9.1 beta 1. (Bruce Momjian, 2011-04-10)
* Clean up a few failures to set collation fields in expression nodes. (Tom Lane, 2011-03-26)
  I'm not sure these have any non-cosmetic implications, but I'm not sure they don't, either. In particular, ensure the CaseTestExpr generated by transformAssignmentIndirection to represent the base target column carries the correct collation, because parse_collate.c won't fix that. Tweak lsyscache.c API so that we can get the appropriate collation without an extra syscache lookup.
* Pass collation to makeConst() instead of looking it up internally. (Tom Lane, 2011-03-25)
  In nearly all cases, the caller already knows the correct collation, and in a number of places, the value the caller has handy is more correct than the default for the type would be. (In particular, this patch makes it significantly less likely that eval_const_expressions will result in changing the exposed collation of an expression.) So an internal lookup is both expensive and wrong.
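  For illustration, a hedged sketch of a post-patch makeConst() call with the caller supplying the collation explicitly; DEFAULT_COLLATION_OID stands in for whatever collation the caller already has at hand:

      #include "postgres.h"
      #include "catalog/pg_collation.h"
      #include "catalog/pg_type.h"
      #include "nodes/makefuncs.h"
      #include "utils/builtins.h"

      static Const *
      make_text_const(const char *val)
      {
          return makeConst(TEXTOID,
                           -1,                    /* typmod */
                           DEFAULT_COLLATION_OID, /* collation, now a parameter */
                           -1,                    /* typlen: varlena */
                           CStringGetTextDatum(val),
                           false,                 /* constisnull */
                           false);                /* constbyval */
      }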
* Per-column collation support (Peter Eisentraut, 2011-02-08)
  This adds collation support for columns and domains, a COLLATE clause to override it per expression, and B-tree index support.
  Peter Eisentraut
  reviewed by Pavel Stehule, Itagaki Takahiro, Robert Haas, Noah Misch
* Stamp copyrights for year 2011. (Bruce Momjian, 2011-01-01)
* Create core infrastructure for KNNGIST. (Tom Lane, 2010-12-02)
  This is a heavily revised version of builtin_knngist_core-0.9. The ordering operators are no longer mixed in with actual quals, which would have confused not only humans but significant parts of the planner. Instead, ordering operators are carried separately throughout planning and execution.
  Since the API for ambeginscan and amrescan functions had to be changed anyway, this commit takes the opportunity to rationalize that a bit. RelationGetIndexScan no longer forces a premature index_rescan call; instead, callers of index_beginscan must call index_rescan too. Aside from making the AM-side initialization logic a bit less peculiar, this has the advantage that we do not make a useless extra am_rescan call when there are runtime key values. AMs formerly could not assume that the key values passed to amrescan were actually valid; now they can.
  Teodor Sigaev and Tom Lane
* Create the system catalog infrastructure needed for KNNGIST. (Tom Lane, 2010-11-24)
  This commit adds columns amoppurpose and amopsortfamily to pg_amop, and column amcanorderbyop to pg_am. For the moment all the entries in amcanorderbyop are "false", since the underlying support isn't there yet. Also, extend the CREATE OPERATOR CLASS/ALTER OPERATOR FAMILY commands with [ FOR SEARCH | FOR ORDER BY sort_operator_family ] clauses to allow the new columns of pg_amop to be populated, and create pg_dump support for dumping that information.
  I also added some documentation, although it's perhaps a bit premature given that the feature doesn't do anything useful yet.
  Teodor Sigaev, Robert Haas, Tom Lane
* Provide hashing support for arrays. (Tom Lane, 2010-10-30)
  The core of this patch is hash_array() and associated typcache infrastructure, which works just about exactly like the existing support for array comparison.
  In addition I did some work to ensure that the planner won't think that an array type is hashable unless its element type is hashable, and similarly for sorting. This includes adding a datatype parameter to op_hashjoinable and op_mergejoinable, and adding an explicit "hashable" flag to SortGroupClause. The lack of a cross-check on the element type was a pre-existing bug in mergejoin support --- but it didn't matter so much before, because if you couldn't sort the element type there wasn't any good alternative to failing anyhow. Now that we have the alternative of hashing the array type, there are cases where we can avoid a failure by being picky at the planner stage, so it's time to be picky.
  The issue of exactly how to combine the per-element hash values to produce an array hash is still open for discussion, but the rest of this is pretty solid, so I'll commit it as-is.
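  A hedged sketch of the lsyscache calls with their new datatype parameter, as a planner-side caller might now write them; the operator and input-type OIDs are placeholders:

      #include "postgres.h"
      #include "utils/lsyscache.h"

      static void
      check_join_strategies(Oid eq_opno, Oid exprtype)
      {
          /* With the extra input-type argument, lsyscache can decline to call
           * an array operator hashable or mergejoinable when the array's
           * element type is not. */
          bool can_hash  = op_hashjoinable(eq_opno, exprtype);
          bool can_merge = op_mergejoinable(eq_opno, exprtype);

          elog(DEBUG1, "hashjoinable: %d, mergejoinable: %d",
               (int) can_hash, (int) can_merge);
      }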
* Improve handling of domains over arrays. (Tom Lane, 2010-10-21)
  This patch eliminates various bizarre behaviors caused by sloppy thinking about the difference between a domain type and its underlying array type. In particular, the operation of updating one element of such an array has to be considered as yielding a value of the underlying array type, *not* a value of the domain, because there's no assurance that the domain's CHECK constraints are still satisfied. If we're intending to store the result back into a domain column, we have to re-cast to the domain type so that constraints are re-checked.
  For similar reasons, such a domain can't be blindly matched to an ANYARRAY polymorphic parameter, because the polymorphic function is likely to apply array-ish operations that could invalidate the domain constraints. For the moment, we just forbid such matching. We might later wish to insert an automatic downcast to the underlying array type, but such a change should also change matching of domains to ANYELEMENT for consistency.
  To ensure that all such logic is rechecked, this patch removes the original hack of setting a domain's pg_type.typelem field to match its base type; the typelem will always be zero instead. In those places where it's really okay to look through the domain type with no other logic changes, use the newly added get_base_element_type function in place of get_element_type.
  catversion bumped due to change in pg_type contents.
  Per bug #5717 from Richard Huxton and subsequent discussion.
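  A hedged illustration of the distinction using the two lsyscache helpers; the OID parameter is a placeholder for a domain declared over int4[]:

      #include "postgres.h"
      #include "utils/lsyscache.h"

      static void
      show_domain_element_lookup(Oid domain_over_int4_array)
      {
          /* Returns InvalidOid now: the domain's own typelem is zero. */
          Oid elem_direct = get_element_type(domain_over_int4_array);

          /* Looks through the domain to its base array type, returning the
           * element type (INT4OID for a domain over int4[]). */
          Oid elem_base = get_base_element_type(domain_over_int4_array);

          elog(DEBUG1, "direct: %u, through domain: %u", elem_direct, elem_base);
      }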
* Remove cvs keywords from all files. (Magnus Hagander, 2010-09-20)
* Standardize get_whatever_oid functions for object types with unqualified names. (Robert Haas, 2010-08-05)
  - Add a missing_ok parameter to get_tablespace_oid.
  - Avoid duplicating get_tablespace_oid guts in objectNamesToOids.
  - Add a missing_ok parameter to get_database_oid.
  - Replace get_roleid and get_role_checked with get_role_oid.
  - Add get_namespace_oid, get_language_oid, get_am_oid.
  - Refactor existing code to use new interfaces.
  Thanks to KaiGai Kohei for the review.
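  A hedged usage sketch of the standardized lookups; the object names are placeholders, and the missing_ok behavior noted in the comments is the conventional one (error versus InvalidOid):

      #include "postgres.h"
      #include "catalog/namespace.h"
      #include "utils/acl.h"

      static void
      lookup_examples(void)
      {
          /* missing_ok = false: a nonexistent schema raises an error. */
          Oid nspoid = get_namespace_oid("public", false);

          /* missing_ok = true: a nonexistent role yields InvalidOid. */
          Oid roleoid = get_role_oid("possibly_missing_role", true);

          if (!OidIsValid(roleoid))
              elog(NOTICE, "role does not exist");

          (void) nspoid;
      }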
* Avoid an Assert failure in deconstruct_array() by making get_attstatsslot() use the actual element type of the array it's disassembling, rather than trusting the type OID passed in by its caller. (Tom Lane, 2010-07-09)
  This is needed because sometimes the planner passes in a type OID that's only binary-compatible with the target column's type, rather than being an exact match. Per an example from Bernd Helmle.
  Possibly we should refactor get_attstatsslot/free_attstatsslot to not expect the caller to supply type ID data at all, but for now I'll just do the minimum-change fix.
  Back-patch to 7.4. Bernd's test case only crashes back to 8.0, but since these subroutines are the same in 7.4, I suspect there may be variant cases that would crash 7.4 as well.
* Patch revoked because of objections. (Simon Riggs, 2010-04-24)
* Add missing optimizer hooks for function cost and number of rows. (Simon Riggs, 2010-04-23)
  Closely follow design of other optimizer hooks: if hook exists retrieve value from plugin; if still not set then get from cache.
* pgindent run for 9.0 (Bruce Momjian, 2010-02-26)
* Wrap calls to SearchSysCache and related functions using macros. (Robert Haas, 2010-02-14)
  The purpose of this change is to eliminate the need for every caller of SearchSysCache, SearchSysCacheCopy, SearchSysCacheExists, GetSysCacheOid, and SearchSysCacheList to know the maximum number of allowable keys for a syscache entry (currently 4). This will make it far easier to increase the maximum number of keys in a future release should we choose to do so, and it makes the code shorter, too.
  Design and review by Tom Lane.
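  A hedged sketch of a typical lsyscache-style lookup written against the new key-count-specific macro (ordinary caller code, not taken from the commit itself):

      #include "postgres.h"
      #include "access/htup.h"
      #include "catalog/pg_type.h"
      #include "utils/syscache.h"

      static int16
      get_type_length(Oid typid)
      {
          /* One-key lookup: SearchSysCache1 replaces SearchSysCache(id, key, 0, 0, 0). */
          HeapTuple tp = SearchSysCache1(TYPEOID, ObjectIdGetDatum(typid));
          int16     typlen = 0;

          if (HeapTupleIsValid(tp))
          {
              Form_pg_type typtup = (Form_pg_type) GETSTRUCT(tp);

              typlen = typtup->typlen;
              ReleaseSysCache(tp);
          }
          return typlen;
      }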
* When estimating the selectivity of an inequality "column > constant" or "column < constant", and the comparison value is in the first or last histogram bin or outside the histogram entirely, try to fetch the actual column min or max value using an index scan (if there is an index on the column). (Tom Lane, 2010-01-04)
  If successful, replace the lower or upper histogram bound with that value before carrying on with the estimate. This limits the estimation error caused by moving min/max values when the comparison value is close to the min or max. Per a complaint from Josh Berkus.
  It is tempting to consider using this mechanism for mergejoinscansel as well, but that would inject index fetches into main-line join estimation not just endpoint cases. I'm refraining from that until we can get a better handle on the costs of doing this type of lookup.
* Update copyright for the year 2010. (Bruce Momjian, 2010-01-02)
* Add the ability to store inheritance-tree statistics in pg_statistic, and teach ANALYZE to compute such stats for tables that have subclasses. (Tom Lane, 2009-12-29)
  Per my proposal of yesterday. autovacuum still needs to be taught about running ANALYZE on parent tables when their subclasses change, but the feature is useful even without that.
* Extend EXPLAIN to support output in XML or JSON format. (Tom Lane, 2009-08-10)
  There are probably still some adjustments to be made in the details of the output, but this gets the basic structure in place.
  Robert Haas
* 8.4 pgindent run, with new combined Linux/FreeBSD/MinGW typedef list provided by Andrew. (Bruce Momjian, 2009-06-11)
* Update copyright for 2009. (Bruce Momjian, 2009-01-01)
* Add hooks to let plugins override the planner's lookups in pg_statistic. (Tom Lane, 2008-09-28)
  Simon Riggs, with some editorialization by me.
* Rearrange the querytree representation of ORDER BY/GROUP BY/DISTINCT items (Tom Lane, 2008-08-02)
  as per my recent proposal:
  1. Fold SortClause and GroupClause into a single node type SortGroupClause. We were already relying on them to be struct-equivalent, so using two node tags wasn't accomplishing much except to get in the way of comparing items with equal().
  2. Add an "eqop" field to SortGroupClause to carry the associated equality operator. This is cheap for the parser to get at the same time it's looking up the sort operator, and storing it eliminates the need for repeated not-so-cheap lookups during planning. In future this will also let us represent GROUP/DISTINCT operations on datatypes that have hash opclasses but no btree opclasses (ie, they have equality but no natural sort order). The previous representation simply didn't work for that, since its only indicator of comparison semantics was a sort operator.
  3. Add a hasDistinctOn boolean to struct Query to explicitly record whether the distinctClause came from DISTINCT or DISTINCT ON. This allows removing some complicated and not 100% bulletproof code that attempted to figure that out from the distinctClause alone.
  This patch doesn't in itself create any new capability, but it's necessary infrastructure for future attempts to use hash-based grouping for DISTINCT and UNION/INTERSECT/EXCEPT.
* Replace the hard-wired type knowledge in TypeCategory() and IsPreferredType() with system catalog lookups, as was foreseen to be necessary almost since their creation. (Tom Lane, 2008-07-30)
  Instead put the information into two new pg_type columns, typcategory and typispreferred. Add support for setting these when creating a user-defined base type.
  The category column is just a "char" (i.e. a poor man's enum), allowing a crude form of user extensibility of the category list: just use an otherwise-unused character. This seems sufficient for foreseen uses, but we could upgrade to having an actual category catalog someday, if there proves to be a huge demand for custom type categories.
  In this patch I have attempted to hew exactly to the behavior of the previous hardwired logic, except for introducing new type categories for arrays, composites, and enums. In particular the default preferred state for user-defined types remains TRUE. That seems worth revisiting, but it should be done as a separate patch from introducing the infrastructure. Likewise, any adjustment of the standard set of categories should be done separately.
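  A hedged sketch of reading the new pg_type fields from backend code; get_type_category_preferred is assumed here to be the accompanying lsyscache accessor, and the type OID is a placeholder:

      #include "postgres.h"
      #include "catalog/pg_type.h"
      #include "utils/lsyscache.h"

      static void
      show_category(Oid typid)
      {
          char typcategory;
          bool typispreferred;

          get_type_category_preferred(typid, &typcategory, &typispreferred);

          /* e.g. typcategory is TYPCATEGORY_ARRAY ('A') for array types */
          elog(DEBUG1, "category %c, preferred %d",
               typcategory, (int) typispreferred);
      }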
* Since createplan.c no longer cares whether index operators are lossy, it has no particular need to do get_op_opfamily_properties() while building an indexscan plan. Postpone that lookup until executor start. (Tom Lane, 2008-04-13)
  This simplifies createplan.c a lot more than it complicates nodeIndexscan.c, and makes things more uniform since we already had to do it that way for RowCompare expressions. Should be a bit faster too, at least for plans that aren't re-used many times, since we avoid palloc'ing and perhaps copying the intermediate list data structure.
* Simplify and standardize conversions between TEXT datums and ordinary C strings. (Tom Lane, 2008-03-25)
  This patch introduces four support functions cstring_to_text, cstring_to_text_with_len, text_to_cstring, and text_to_cstring_buffer, and two macros CStringGetTextDatum and TextDatumGetCString. A number of existing macros that provided variants on these themes were removed. Most of the places that need to make such conversions now require just one function or macro call, in place of the multiple notational layers that used to be needed.
  There are no longer any direct calls of textout or textin, and we got most of the places that were using handmade conversions via memcpy (there may be a few still lurking, though).
  This commit doesn't make any serious effort to eliminate transient memory leaks caused by detoasting toasted text objects before they reach text_to_cstring. We changed PG_GETARG_TEXT_P to PG_GETARG_TEXT_PP in a few places where it was easy, but much more could be done.
  Brendan Jurd and Tom Lane
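  A short, hedged illustration of the new helpers from the caller's side (ordinary backend C, not code from the commit):

      #include "postgres.h"
      #include "utils/builtins.h"

      static void
      text_conversion_examples(void)
      {
          /* C string -> text and back, each in a single call. */
          text *t = cstring_to_text("hello");
          char *s = text_to_cstring(t);

          /* Datum-level variants, handy inside SQL-callable functions. */
          Datum d  = CStringGetTextDatum("world");
          char *s2 = TextDatumGetCString(d);

          elog(DEBUG1, "%s %s", s, s2);
      }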
* Update copyrights in source tree to 2008. (Bruce Momjian, 2008-01-01)
* pgindent run for 8.3. (Bruce Momjian, 2007-11-15)
* Fix ALTER COLUMN TYPE to preserve the tablespace and reloptions of indexes it affects. (Tom Lane, 2007-10-13)
  The original coding neglected tablespace entirely (causing the indexes to move to the database's default tablespace) and for an index belonging to a UNIQUE or PRIMARY KEY constraint, it would actually try to assign the parent table's reloptions to the index :-(. Per bug #3672 and subsequent investigation.
  8.0 and 8.1 did not have reloptions, but the tablespace bug is present.
* Support arrays of composite types, including the rowtypes of regular tables and views (but not system catalogs, nor sequences or toast tables). (Tom Lane, 2007-05-11)
  Get rid of the hardwired convention that a type's array type is named exactly "_type", instead using a new column pg_type.typarray to provide the linkage. (It still will be named "_type", though, except in odd corner cases such as maximum-length type names.)
  Along the way, make tracking of owner and schema dependencies for types more uniform: a type directly created by the user has these dependencies, while a table rowtype or auto-generated array type does not have them, but depends on its parent object instead.
  David Fetter, Andrew Dunstan, Tom Lane
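  A hedged sketch of following the new typarray linkage from backend code via the long-standing lsyscache helper, rather than relying on the "_type" naming convention; the wrapper function is illustrative only:

      #include "postgres.h"
      #include "catalog/pg_type.h"
      #include "utils/lsyscache.h"

      static Oid
      array_type_of(Oid typid)
      {
          /* Resolved through pg_type.typarray after this commit;
           * returns InvalidOid if the type has no array type. */
          Oid arrtypid = get_array_type(typid);

          if (!OidIsValid(arrtypid))
              elog(ERROR, "type %u has no array type", typid);

          return arrtypid;    /* e.g. the OID of int4[] for INT4OID */
      }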
* Support enum data types. Along the way, use macros for the values of pg_type.typtype wherever practical. (Tom Lane, 2007-04-02)
  Tom Dunstan, with some kibitzing from Tom Lane.
* Fix 8.2 breakage of domains over array types, and add a regression test case to cover it. (Tom Lane, 2007-03-19)
  Per report from Anton Pikhteryev.
* Fix up the remaining places where the expression node structure would lose available information about the typmod of an expression; namely, Const, ArrayRef, ArrayExpr, and EXPR and ARRAY SubLinks. (Tom Lane, 2007-03-17)
  In the ArrayExpr and SubLink cases it wasn't really the data structure's fault, but exprTypmod() being lazy. This seems like a good idea in view of the expected increase in typmod usage from Teodor's work to allow user-defined types to have typmods. In particular this responds to the concerns we had about eliminating the special-purpose hack that exprTypmod() used to have for BPCHAR Consts. We can now tell whether or not such a Const has been cast to a specific length, and report or display properly if so.
  initdb forced due to changes in stored rules.
* Fix up foreign-key mechanism so that there is a sound semantic basis for the equality checks it applies, instead of a random dependence on whatever operators might be named "=". (Tom Lane, 2007-02-14)
  The equality operators will now be selected from the opfamily of the unique index that the FK constraint depends on to enforce uniqueness of the referenced columns; therefore they are certain to be consistent with that index's notion of equality. Among other things this should fix the problem noted awhile back that pg_dump may fail for foreign-key constraints on user-defined types when the required operators aren't in the search path.
  This also means that the former warning condition about "foreign key constraint will require costly sequential scans" is gone: if the comparison condition isn't indexable then we'll reject the constraint entirely. All per past discussions.
  Along the way, make the RI triggers look into pg_constraint for their information, instead of using pg_trigger.tgargs; and get rid of the always error-prone fixed-size string buffers in ri_triggers.c in favor of building up the RI queries in StringInfo buffers.
  initdb forced due to columns added to pg_constraint and pg_trigger.
* Add support for cross-type hashing in hash index searches and hash joins. (Tom Lane, 2007-01-30)
  Hashing for aggregation purposes still needs work, so it's not time to mark any cross-type operators as hashable for general use, but these cases work if the operators are so marked by hand in the system catalogs.
* Add COST and ROWS options to CREATE/ALTER FUNCTION, plus underlying pg_proc columns procost and prorows, to allow simple user adjustment of the estimated cost of a function call, as well as control of the estimated number of rows returned by a set-returning function. (Tom Lane, 2007-01-22)
  We might eventually wish to extend this to allow function-specific estimation routines, but there seems to be consensus that we should try a simple constant estimate first. In particular this provides a relatively simple way to control the order in which different WHERE clauses are applied in a plan node, which is a Good Thing in view of the fact that the recent EquivalenceClass planner rewrite made that much less predictable than before.
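  On the backend side, the planner reads the new pg_proc columns through lsyscache accessors; a hedged sketch assuming the get_func_cost/get_func_rows helpers of that era, with a placeholder function OID:

      #include "postgres.h"
      #include "utils/lsyscache.h"

      static void
      show_func_estimates(Oid funcid)
      {
          /* procost is expressed in units of cpu_operator_cost;
           * prorows only matters for set-returning functions. */
          float4 per_call_cost = get_func_cost(funcid);
          float4 est_rows      = get_func_rows(funcid);

          elog(DEBUG1, "cost %f, rows %f",
               (double) per_call_cost, (double) est_rows);
      }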
* Refactor some lsyscache routines to eliminate duplicate code and save a couple of syscache lookups in make_pathkey_from_sortinfo(). (Tom Lane, 2007-01-21)
* Refactor planner's pathkeys data structure to create a separate, explicit representation of equivalence classes of variables. (Tom Lane, 2007-01-20)
  This is an extensive rewrite, but it brings a number of benefits:
  * planner no longer fails in the presence of "incomplete" operator families that don't offer operators for every possible combination of datatypes.
  * avoid generating and then discarding redundant equality clauses.
  * remove bogus assumption that derived equalities always use operators named "=".
  * mergejoins can work with a variety of sort orders (e.g., descending) now, instead of tying each mergejoinable operator to exactly one sort order.
  * better recognition of redundant sort columns.
  * can make use of equalities appearing underneath an outer join.
* Change the planner-to-executor API so that the planner tells the executor which comparison operators to use for plan nodes involving tuple comparison (Agg, Group, Unique, SetOp). (Tom Lane, 2007-01-10)
  Formerly the executor looked up the default equality operator for the datatype, which was really pretty shaky, since it's possible that the data being fed to the node is sorted according to some nondefault operator class that could have an incompatible idea of equality. The planner knows what it has sorted by and therefore can provide the right equality operator to use. Also, this change moves a couple of catalog lookups out of the executor and into the planner, which should help startup time for pre-planned queries by some small amount.
  Modify the planner to remove some other cavalier assumptions about always being able to use the default operators. Also add "nulls first/last" info to the Plan node for a mergejoin --- neither the executor nor the planner can cope yet, but at least the API is in place.
* Support ORDER BY ... NULLS FIRST/LAST, and add ASC/DESC/NULLS FIRST/NULLS LAST per-column options for btree indexes. (Tom Lane, 2007-01-09)
  The planner's support for this is still pretty rudimentary; it does not yet know how to plan mergejoins with nondefault ordering options. The documentation is pretty rudimentary, too. I'll work on improving that stuff later.
  Note incompatible change from prior behavior: ORDER BY ... USING will now be rejected if the operator is not a less-than or greater-than member of some btree opclass. This prevents less-than-sane behavior if an operator that doesn't actually define a proper sort ordering is selected.
* Update CVS HEAD for 2007 copyright. Back branches are typically not back-stamped for this. (Bruce Momjian, 2007-01-05)
* Support type modifiers for user-defined types, and pull most knowledge about typmod representation for standard types out into type-specific typmod I/O functions. (Tom Lane, 2006-12-30)
  Teodor Sigaev, with some editorialization by Tom Lane.
* Restructure operator classes to allow improved handling of cross-data-type cases. (Tom Lane, 2006-12-23)
  Operator classes now exist within "operator families". While most families are equivalent to a single class, related classes can be grouped into one family to represent the fact that they are semantically compatible. Cross-type operators are now naturally adjunct parts of a family, without having to wedge them into a particular opclass as we had done originally.
  This commit restructures the catalogs and cleans up enough of the fallout so that everything still works at least as well as before, but most of the work needed to actually improve the planner's behavior will come later. Also, there are not yet CREATE/DROP/ALTER OPERATOR FAMILY commands; the only way to create a new family right now is to allow CREATE OPERATOR CLASS to make one by default. I owe some more documentation work, too. But that can all be done in smaller pieces once this infrastructure is in place.