path: root/src/backend/utils

* Allow I/O reliability checks using 16-bit checksums (Simon Riggs, 2013-03-22)
  Checksums are set immediately prior to flush out of shared buffers and checked
  when pages are read in again. Hint bit setting will require full page write
  when block is dirtied, which causes various infrastructure changes. Extensive
  comments, docs and README.

  WARNING message thrown if checksum fails on non-all zeroes page; ERROR thrown
  but can be disabled with ignore_checksum_failure = on.

  Feature enabled by an initdb option, since transition from option off to
  option on is long and complex and has not yet been implemented. Default is
  not to use checksums. Checksum used is WAL CRC-32 truncated to 16-bits.

  Simon Riggs, Jeff Davis, Greg Smith
  Wide input and assistance from many community members. Thank you.

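  For illustration only (not part of the commit message): a sketch of how the
  ignore_checksum_failure escape hatch mentioned above might be used when
  salvaging data, assuming the cluster was initialized with checksums enabled;
  'damaged_table' is a hypothetical name.

      SET ignore_checksum_failure = on;   -- report checksum failures as WARNINGs
      SELECT * FROM damaged_table;        -- salvage what can still be read
      RESET ignore_checksum_failure;
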
* Change commit_delay to be SUSET for 9.3+ (Simon Riggs, 2013-03-22)
  Prior to 9.3 the commit_delay affected only the current user, whereas now
  only the group leader waits while holding the WALWriteLock. Deliberate or
  accidental settings to a poor value could seriously degrade performance for
  all users. Privileges may be delegated by SECURITY DEFINER functions for
  anyone that needs per-user settings in real situations.

  Request for change from Peter Geoghegan

* Fix "element <@ range" cost estimation.Heikki Linnakangas2013-03-21
| | | | | | | | | | | | | The statistics-based cost estimation patch for range types broke that, by incorrectly assuming that the left operand of all range oeprators is a range. That lead to a "type x is not a range type" error. Because it took so long for anyone to notice, add a regression test for that case. We still don't do proper statistics-based cost estimation for that, so you just get a default constant estimate. We should look into implementing that, but this patch at least fixes the regression. Spotted by Tom Lane, when testing query from Josh Berkus.
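  For illustration only (not part of the commit message): the operator shape in
  question, with a plain element on the left and a range on the right; 'events'
  and its range column 'valid_ids' are hypothetical.

      SELECT 42 <@ int4range(1, 100);               -- true
      SELECT * FROM events WHERE 42 <@ valid_ids;   -- must be planned without
                                                    -- treating 42 as a range
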
* Allow extracting machine-readable object identity (Alvaro Herrera, 2013-03-20)
  Introduce pg_identify_object(oid,oid,int4), which is similar in spirit to
  pg_describe_object but instead produces a row of machine-readable information
  to uniquely identify the given object, without resorting to OIDs or other
  internal representation.

  This is intended to be used in the event trigger implementation, to report
  objects being operated on; but it has usefulness of its own.

  Catalog version bumped because of the new function.

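  For illustration only (not part of the commit message): a possible call,
  identifying a table by catalog OID, object OID, and sub-object id 0;
  'my_table' is a hypothetical relation.

      SELECT * FROM pg_identify_object('pg_class'::regclass,
                                       'my_table'::regclass, 0);
      -- yields machine-readable fields such as type, schema, name and a
      -- complete identity string, e.g. ('table', 'public', 'my_table', ...)
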
* Improve signal-handler lockout mechanism in timeout.c. (Tom Lane, 2013-03-17)
  Rather than doing a fairly-expensive setitimer() call to prevent interrupts
  from happening, let's just invent a simple boolean flag that the signal
  handler is required to check. This is not only faster but considerably more
  robust than before, since the previous code effectively assumed that only
  ITIMER_REAL events would ever fire the SIGALRM handler, which is obviously
  something that can be broken easily by third-party code.

  Zoltán Böszörményi and Tom Lane

* Move pqsignal() to libpgport. (Tom Lane, 2013-03-17)
  We had two copies of this function in the backend and libpq, which was
  already pretty bogus, but it turns out that we need it in some other programs
  that don't use libpq (such as pg_test_fsync). So put it where it probably
  should have been all along. The signal-mask-initialization support in
  src/backend/libpq/pqsignal.c stays where it is, though, since we only need
  that in the backend.

* Add lock_timeout configuration parameter. (Tom Lane, 2013-03-16)
  This GUC allows limiting the time spent waiting to acquire any one
  heavyweight lock.

  In support of this, improve the recently-added timeout infrastructure to
  permit efficiently enabling or disabling multiple timeouts at once. That
  reduces the performance hit from turning on lock_timeout, though it's still
  not zero.

  Zoltán Böszörményi, reviewed by Tom Lane, Stephen Frost, and Hari Babu

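  For illustration only (not part of the commit message): bounding how long a
  session waits for any single heavyweight lock; 'accounts' is a hypothetical
  table.

      SET lock_timeout = '2s';
      BEGIN;
      LOCK TABLE accounts IN ACCESS EXCLUSIVE MODE;  -- errors out if the lock
                                                     -- is not granted within 2s
      COMMIT;
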
* Extend format() to handle field width and left/right alignment. (Tom Lane, 2013-03-14)
  This change adds some more standard sprintf() functionality to format().

  Pavel Stehule, reviewed by Dean Rasheed and Kyotaro Horiguchi

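  For illustration only (not part of the commit message): the width and
  alignment flags this refers to.

      SELECT format('|%10s|', 'pg');    -- '|        pg|'  right-aligned, width 10
      SELECT format('|%-10s|', 'pg');   -- '|pg        |'  left-aligned, width 10
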
* Add cost estimation of range @> and <@ operators. (Heikki Linnakangas, 2013-03-14)
  The estimates are based on the existing lower bound histogram, and a new
  histogram of range lengths.

  Bump catversion, because the range length histogram now needs to be present
  in statistic slot kind 6, or you get an error on @> and <@ queries. (A
  re-ANALYZE would be enough to fix that, though.)

  Alexander Korotkov, with some refactoring by me.

* JSON generation improvements. (Andrew Dunstan, 2013-03-10)
  This adds the following:

      json_agg(anyrecord) -> json
      to_json(any) -> json
      hstore_to_json(hstore) -> json (also used as a cast)
      hstore_to_json_loose(hstore) -> json

  The last provides heuristic treatment of numbers and booleans.

  Also, in json generation, if any non-builtin type has a cast to json, that
  function is used instead of the type's output function.

  Andrew Dunstan, reviewed by Steve Singer.

  Catalog version bumped.

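  For illustration only (not part of the commit message): typical calls to the
  new functions; the VALUES list stands in for a real table.

      SELECT to_json('one "quoted" word'::text);
      -- "one \"quoted\" word"

      SELECT json_agg(t) FROM (VALUES (1, 'x'), (2, 'y')) AS t(id, val);
      -- a JSON array of row objects, e.g. [{"id":1,"val":"x"}, {"id":2,"val":"y"}]
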
* SP-GiST support of the range adjacent operator -|- (Heikki Linnakangas, 2013-03-08)
  Alexander Korotkov, reviewed by Jeff Davis.

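  For illustration only (not part of the commit message): the adjacency
  operator that an SP-GiST range index can now support.

      SELECT int4range(1, 5) -|- int4range(5, 10);   -- true: the ranges touch
      SELECT int4range(1, 5) -|- int4range(6, 10);   -- false: there is a gap
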
* Arrange to cache FdwRoutine structs in foreign tables' relcache entries. (Tom Lane, 2013-03-06)
  This saves several catalog lookups per reference. It's not all that exciting
  right now, because we'd managed to minimize the number of places that need to
  fetch the data; but the upcoming writable-foreign-tables patch needs this
  info in a lot more places.

* Code beautification for object-access hook machinery. (Robert Haas, 2013-03-06)
  KaiGai Kohei

* Fix to_char() to use ASCII-only case-folding rules where appropriate. (Tom Lane, 2013-03-05)
  formatting.c used locale-dependent case folding rules in some code paths
  where the result isn't supposed to be locale-dependent, for example
  to_char(timestamp, 'DAY'). Since the source data is always just ASCII in
  these cases, that usually didn't matter ... but it does matter in Turkish
  locales, which have unusual treatment of "i" and "I". To confuse matters even
  more, the misbehavior was only visible in UTF8 encoding, because in
  single-byte encodings we used pg_toupper/pg_tolower which don't have
  locale-specific behavior for ASCII characters.

  Fix by providing intentionally ASCII-only case-folding functions and using
  these where appropriate. Per bug #7913 from Adnan Dursun. Back-patch to all
  active branches, since it's been like this for a long time.

* Fix overflow check in tm2timestamp (this time for sure). (Tom Lane, 2013-03-04)
  I fixed this code back in commit 841b4a2d5, but didn't think carefully enough
  about the behavior near zero, which meant it improperly rejected
  1999-12-31 24:00:00. Per report from Magnus Hagander.

* Fix map_sql_value_to_xml_value() to treat domains like their base types. (Tom Lane, 2013-03-03)
  This was already the case for domains over arrays, but not for domains over
  certain built-in types such as boolean. The special formatting rules for
  those types should apply to domains over them as well. Per discussion.

  While this is a bug fix, it's also a behavioral change that seems likely to
  trip up some applications. So no back-patch.

  Pavel Stehule

* Add materialized view relations. (Kevin Grittner, 2013-03-03)
  A materialized view has a rule just like a view and a heap and other physical
  properties like a table. The rule is only used to populate the table;
  references in queries refer to the materialized data.

  This is a minimal implementation, but should still be useful in many cases.
  Currently data is only populated "on demand" by the CREATE MATERIALIZED VIEW
  and REFRESH MATERIALIZED VIEW statements. It is expected that future releases
  will add incremental updates with various timings, and that a more refined
  concept of defining what is "fresh" data will be developed. At some point it
  may even be possible to have queries use a materialized view in place of
  references to underlying tables, but that requires the other above-mentioned
  features to be working first.

  Much of the documentation work by Robert Haas.
  Review by Noah Misch, Thom Brown, Robert Haas, Marko Tiikkaja
  Security review by KaiGai Kohei, with a decision on how best to implement
  sepgsql still pending.

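  For illustration only (not part of the commit message): the two "on demand"
  statements mentioned above; 'orders' is a hypothetical table.

      CREATE MATERIALIZED VIEW order_totals AS
          SELECT customer_id, sum(amount) AS total
          FROM orders
          GROUP BY customer_id;

      REFRESH MATERIALIZED VIEW order_totals;   -- repopulate the stored data
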
* Move relpath() to libpgcommon (Alvaro Herrera, 2013-02-21)
  This enables non-backend code, such as pg_xlogdump, to use it easily. The
  previous location, in src/backend/catalog/catalog.c, made that essentially
  impossible because that file depends on many backend-only facilities; so this
  needs to live separately.

* Split pgstat file in smaller pieces (Alvaro Herrera, 2013-02-18)
  We now write one file per database and one global file, instead of having the
  whole thing in a single huge file. This reduces the I/O that must be done
  when partial data is required -- which is all the time, because each process
  only needs information on its own database anyway. Also, the autovacuum
  launcher does not need data about tables and functions in each database;
  having the global stats for all DBs is enough.

  Catalog version bumped because we have a new subdir under PGDATA.

  Author: Tomas Vondra. Some rework by Álvaro
  Testing by Jeff Janes
  Other discussion by Heikki Linnakangas, Tom Lane.

* Add ALTER ROLE ALL SET command (Peter Eisentraut, 2013-02-17)
  This generalizes the existing ALTER ROLE ... SET and ALTER DATABASE ... SET
  functionality to allow creating settings that apply to all users in all
  databases.

  reviewed by Pavel Stehule

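  For illustration only (not part of the commit message): a session default
  that applies to every role in every database, unless a more specific per-role
  or per-database setting overrides it.

      ALTER ROLE ALL SET log_min_duration_statement = '250ms';
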
* Fix CVE-2013-0255 properly. (Tom Lane, 2013-02-13)
  Revert commit ab0f7b6089fd215f6ce6081e2e222c38d643a526 (in HEAD only) in
  favor of the proper solution, which is to declare enum_recv() correctly in
  the system catalogs. It should be declared to take type "internal" not
  "cstring".

  Also improve the type_sanity regression test, which should have caught this
  typo, so that it actually would. Most of the relevant checks on the signature
  of type I/O functions should not have been restricted to
  basetypes/pseudotypes, as they should apply to any type's I/O functions.

* Create libpgcommon, and move pg_malloc et al to it (Alvaro Herrera, 2013-02-12)
  libpgcommon is a new static library to allow sharing code among the various
  frontend programs and backend; this lets us eliminate duplicate
  implementations of common routines. We avoid libpgport, because that's
  intended as a place for porting issues; per discussion, it seems better to
  keep them separate.

  The first use case, and the only one implemented by this patch, is pg_malloc
  and friends, which many frontend programs were already using.

  At the same time, we can use this to provide palloc emulation functions for
  the frontend; this way, some palloc-using files in the backend can also be
  used by the frontend cleanly. To do this, we change palloc() in the backend
  to be a function instead of a macro on top of MemoryContextAlloc(). This was
  previously believed to cause loss of performance, but this implementation has
  been tweaked by Tom and Andres so that on modern compilers it provides a
  slight improvement over the previous one.

  This also lets us clean up some places that were already using localized
  hacks.

  Most of the pg_malloc/palloc changes in this patch were authored by Andres
  Freund. Zoltán Böszörményi also independently provided a form of that.
  libpgcommon infrastructure was authored by Álvaro.

* Simplify box_overlap computations. (Tom Lane, 2013-02-08)
  Given the assumption that a box's high coordinates are not less than its low
  coordinates, the tests in box_ov() are overly complicated and can be reduced
  to about half as much work. Since many other functions in geo_ops.c rely on
  that assumption, there doesn't seem to be a good reason not to use it here.

  Per discussion of Alexander Korotkov's GiST fix, which was already using the
  simplified logic (in a non-fuzzy form, but the equivalence holds just as well
  for fuzzy).

* Enable building with Microsoft Visual Studio 2012. (Andrew Dunstan, 2013-02-06)
  Backpatch to release 9.2

  Brar Piening and Noah Misch, reviewed by Craig Ringer.

* Prevent execution of enum_recv() from SQL. (Tom Lane, 2013-02-04)
  This function was misdeclared to take cstring when it should take internal.
  This at least allows crashing the server, and in principle an attacker might
  be able to use the function to examine the contents of server memory.

  The correct fix is to adjust the system catalog contents (and fix the
  regression tests that should have caught this but failed to). However, asking
  users to correct the catalog contents in existing installations is a pain, so
  as a band-aid fix for the back branches, install a check in enum_recv() to
  make it throw error if called with a cstring argument. We will later revert
  this in HEAD in favor of correcting the catalogs.

  Our thanks to Sumit Soni (via Secunia SVCRP) for reporting this issue.

  Security: CVE-2013-0255

* Reset vacuum_defer_cleanup_age to PGC_SIGHUP. (Simon Riggs, 2013-02-04)
  Revert commit 84725aa5efe11688633b553e58113efce4181f2e

* Perform line wrapping and indenting by default in ruleutils.c. (Tom Lane, 2013-02-03)
  This patch changes pg_get_viewdef() and allied functions so that
  PRETTY_INDENT processing is always enabled. Per discussion, only the
  PRETTY_PAREN processing (that is, stripping of "unnecessary" parentheses)
  poses any real forward-compatibility risk, so we may as well make dump output
  look as nice as we safely can.

  Also, set the default wrap length to zero (i.e., wrap after each SELECT or
  FROM list item), since there's no very principled argument for the former
  default of 80-column wrapping, and most people seem to agree this way looks
  better.

  Marko Tiikkaja, reviewed by Jeevan Chalke, further hacking by Tom Lane

* Mark vacuum_defer_cleanup_age as PGC_POSTMASTER. (Simon Riggs, 2013-02-02)
  Following bug analysis of #7819 by Tom Lane

* Reject nonzero day fields in AT TIME ZONE INTERVAL functions. (Tom Lane, 2013-01-31)
  It's not sensible for an interval that's used as a time zone value to be
  larger than a day. When we changed the interval type to contain a separate
  day field, check_timezone() was adjusted to reject nonzero day values, but
  timetz_izone(), timestamp_izone(), and timestamptz_izone() evidently were
  overlooked.

  While at it, make the error messages for these three cases consistent.

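  For illustration only (not part of the commit message): a zone given as an
  interval must fit within a day.

      SELECT timestamp '2013-01-31 12:00' AT TIME ZONE INTERVAL '05:30';  -- accepted
      SELECT timestamp '2013-01-31 12:00' AT TIME ZONE INTERVAL '1 day';  -- now rejected
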
* Provide database object names as separate fields in error messages. (Tom Lane, 2013-01-29)
  This patch addresses the problem that applications currently have to extract
  object names from possibly-localized textual error messages, if they want to
  know for example which index caused a UNIQUE_VIOLATION failure. It adds new
  error message fields to the wire protocol, which can carry the name of a
  table, table column, data type, or constraint associated with the error.
  (Since the protocol spec has always instructed clients to ignore unrecognized
  field types, this should not create any compatibility problem.)

  Support for providing these new fields has been added to just a limited set
  of error reports (mainly, those in the "integrity constraint violation"
  SQLSTATE class), but we will doubtless add them to more calls in future.

  Pavel Stehule, reviewed and extensively revised by Peter Geoghegan, with
  additional hacking by Tom Lane.

* Allow CREATE TABLE IF NOT EXISTS to succeed if the schema is nonexistent (Bruce Momjian, 2013-01-26)
  Previously, CREATE TABLE IF NOT EXISTS threw an error if the schema was
  nonexistent. This was done by passing 'missing_ok' to the function that looks
  up the schema oid.

* Change plan caching to honor, not resist, changes in search_path. (Tom Lane, 2013-01-25)
  In the initial implementation of plan caching, we saved the active
  search_path when a plan was first cached, then reinstalled that path anytime
  we needed to reparse or replan. The idea of that was to try to reselect the
  same referenced objects, in somewhat the same way that views continue to
  refer to the same objects in the face of schema or name changes. Of course,
  that analogy doesn't bear close inspection, since holding the search_path
  fixed doesn't cope with object drops or renames. Moreover sticking with the
  old path seems to create more surprises than it avoids. So instead of doing
  that, consider that the cached plan depends on search_path, and force
  reparse/replan if the active search_path is different than it was when we
  last saved the plan.

  This gets us fairly close to having "transparency" of plan caching, in the
  sense that the cached statement acts the same as if you'd just resubmitted
  the original query text for another execution. There are still some corner
  cases where this fails though: a new object added in the search path
  schema(s) might capture a reference in the query text, but we'd not realize
  that and force a reparse. We might try to fix that in the future, but for
  the moment it looks too expensive and complicated.

* Fix concat() and format() to handle VARIADIC-labeled arguments correctly. (Tom Lane, 2013-01-25)
  Previously, the VARIADIC labeling was effectively ignored, but now these
  functions act as though the array elements had all been given as separate
  arguments.

  Pavel Stehule

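  For illustration only (not part of the commit message): after the fix, the
  two calls below behave the same.

      SELECT concat('a', 'b', 'c');                   -- 'abc'
      SELECT concat(VARIADIC ARRAY['a', 'b', 'c']);   -- 'abc', elements treated
                                                      -- as separate arguments
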
* Improve concurrency of foreign key locking (Alvaro Herrera, 2013-01-23)
  This patch introduces two additional lock modes for tuples: "SELECT FOR KEY
  SHARE" and "SELECT FOR NO KEY UPDATE". These don't block each other, in
  contrast with already existing "SELECT FOR SHARE" and "SELECT FOR UPDATE".
  UPDATE commands that do not modify the values stored in the columns that are
  part of the key of the tuple now grab a SELECT FOR NO KEY UPDATE lock on the
  tuple, allowing them to proceed concurrently with tuple locks of the FOR KEY
  SHARE variety.

  Foreign key triggers now use FOR KEY SHARE instead of FOR SHARE; this means
  the concurrency improvement applies to them, which is the whole point of this
  patch.

  The added tuple lock semantics require some rejiggering of the multixact
  module, so that the locking level that each transaction is holding can be
  stored alongside its Xid. Also, multixacts now need to persist across server
  restarts and crashes, because they can now represent not only tuple locks,
  but also tuple updates. This means we need more careful tracking of lifetime
  of pg_multixact SLRU files; since they now persist longer, we require more
  infrastructure to figure out when they can be removed. pg_upgrade also needs
  to be careful to copy pg_multixact files over from the old server to the new,
  or at least part of multixact.c state, depending on the versions of the old
  and new servers.

  Tuple time qualification rules (HeapTupleSatisfies routines) need to be
  careful not to consider tuples with the "is multi" infomask bit set as being
  only locked; they might need to look up MultiXact values (i.e. possibly do
  pg_multixact I/O) to find out the Xid that updated a tuple, whereas they
  previously were assured to only use information readily available from the
  tuple header. This is considered acceptable, because the extra I/O would
  involve cases that would previously cause some commands to block waiting for
  concurrent transactions to finish.

  Another important change is the fact that locking tuples that have previously
  been updated causes the future versions to be marked as locked, too; this is
  essential for correctness of foreign key checks. This causes additional
  WAL-logging, also (there was previously a single WAL record for a locked
  tuple; now there are as many as there are updated copies of the tuple).

  With all this in place, contention related to tuples being checked by foreign
  key rules should be much reduced. As a bonus, the old behavior that a
  subtransaction grabbing a stronger tuple lock than the parent (sub)transaction
  held on a given tuple and later aborting caused the weaker lock to be lost,
  has been fixed.

  Many new spec files were added for isolation tester framework, to ensure
  overall behavior is sane. There's probably room for several more tests.

  There were several reviewers of this patch; in particular, Noah Misch and
  Andres Freund spent considerable time in it. Original idea for the patch came
  from Simon Riggs, after a problem report by Joel Jacobson. Most code is from
  me, with contributions from Marti Raudsepp, Alexander Shulgin, Noah Misch and
  Andres Freund.

  This patch was discussed in several pgsql-hackers threads; the most important
  start at the following message-ids:
      AANLkTimo9XVcEzfiBR-ut3KVNDkjm2Vxh+t8kAmWjPuv@mail.gmail.com
      1290721684-sup-3951@alvh.no-ip.org
      1294953201-sup-2099@alvh.no-ip.org
      1320343602-sup-2290@alvh.no-ip.org
      1339690386-sup-8927@alvh.no-ip.org
      4FE5FF020200002500048A3D@gw.wicourts.gov
      4FEAB90A0200002500048B7D@gw.wicourts.gov

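  For illustration only (not part of the commit message): the two new lock
  strengths in action; 'parent' and 'child' are hypothetical tables linked by a
  foreign key.

      -- Session 1: update a non-key column of the referenced row
      UPDATE parent SET note = 'updated' WHERE id = 1;   -- takes FOR NO KEY UPDATE

      -- Session 2: the FK trigger locks the parent row with FOR KEY SHARE,
      -- which no longer blocks behind session 1
      INSERT INTO child (parent_id) VALUES (1);

      -- The new strengths can also be requested explicitly:
      SELECT * FROM parent WHERE id = 1 FOR KEY SHARE;
      SELECT * FROM parent WHERE id = 1 FOR NO KEY UPDATE;
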
* Add infrastructure for storing a VARIADIC ANY function's VARIADIC flag. (Tom Lane, 2013-01-21)
  Originally we didn't bother to mark FuncExprs with any indication whether
  VARIADIC had been given in the source text, because there didn't seem to be
  any need for it at runtime. However, because we cannot fold a VARIADIC ANY
  function's arguments into an array (since they're not necessarily all the
  same type), we do actually need that information at runtime if VARIADIC ANY
  functions are to respond unsurprisingly to use of the VARIADIC keyword. Add
  the missing field, and also fix ruleutils.c so that VARIADIC ANY function
  calls are dumped properly.

  Extracted from a larger patch that also fixes concat() and format() (the only
  two extant VARIADIC ANY functions) to behave properly when VARIADIC is
  specified. This portion seems appropriate to review and commit separately.

  Pavel Stehule

* Add ddl_command_end support for event triggers. (Robert Haas, 2013-01-21)
  Dimitri Fontaine, with slight changes by me

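  For illustration only (not part of the commit message): a sketch of an event
  trigger firing at the new ddl_command_end event; the function and trigger
  names are hypothetical.

      CREATE FUNCTION log_ddl_end() RETURNS event_trigger
      LANGUAGE plpgsql AS $$
      BEGIN
          RAISE NOTICE 'DDL command finished: %', tg_tag;
      END;
      $$;

      CREATE EVENT TRIGGER log_ddl ON ddl_command_end
          EXECUTE PROCEDURE log_ddl_end();
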
* Fix error-checking typo in check_TSCurrentConfig(). (Tom Lane, 2013-01-20)
  The code failed to detect an out-of-memory failure.

  Xi Wang

* Fix an O(N^2) performance issue for sessions modifying many relations. (Tom Lane, 2013-01-20)
  AtEOXact_RelationCache() scanned the entire relation cache at the end of any
  transaction that created a new relation or assigned a new relfilenode. Thus,
  clients such as pg_restore had an O(N^2) performance problem that would start
  to be noticeable after creating 10000 or so tables. Since typically only a
  small number of relcache entries need any cleanup, we can fix this by keeping
  a small list of their OIDs and doing hash_searches for them. We fall back to
  the full-table scan if the list overflows.

  Ideally, the maximum list length would be set at the point where N
  hash_searches would cost just less than the full-table scan. Some quick
  experimentation says that point might be around 50-100; I (tgl)
  conservatively set MAX_EOXACT_LIST = 32. For the case that we're worried
  about here, which is short single-statement transactions, it's unlikely there
  would ever be more than about a dozen list entries anyway; so it's probably
  not worth being too tense about the value.

  We could avoid the hash_searches by instead keeping the target relcache
  entries linked into a list, but that would be noticeably more complicated and
  bug-prone because of the need to maintain such a list in the face of relcache
  entry drops. Since a relcache entry can only need such cleanup after a
  somewhat-heavyweight filesystem operation, trying to save a hash_search per
  cleanup doesn't seem very useful anyway --- it's the scan over all the
  not-needing-cleanup entries that we wish to avoid here.

  Jeff Janes, reviewed and tweaked a bit by Tom Lane

* Improve memory space management in tuplesort and tuplestore. (Tom Lane, 2013-01-17)
  The code originally just doubled the size of the tuple-pointer array so long
  as that would fit in allowedMem. This could result in failing to use as much
  as half of allowedMem, if (as is typical) the last doubling attempt didn't
  quite fit. Worse, we might double the array size but be unable to use most of
  the added slots, because there was no room left within the allowedMem limit
  for tuples the slots should point to.

  To fix, double only so long as we've used less than half of allowedMem in
  total. Then do one more array enlargement, but scale it based on total memory
  consumption so far. This will work nicely as long as the average tuple size
  is reasonably stable, and in any case should be better than the old method.

  This change will result in large sort operations consuming a larger fraction
  of work_mem than they typically did in the past. The release notes should
  mention that users may want to revisit their work_mem settings, if they'd
  tuned those settings based on the old behavior of sorting.

  Jeff Janes, reviewed by Peter Geoghegan and Robert Haas

* Base the default SSL ciphers on DEFAULT instead of ALL (Magnus Hagander, 2013-01-17)
  It's better to start from what the OpenSSL people consider a good default and
  then remove insecure things (low encryption, exportable encryption and md5 at
  this point) from that, instead of starting from everything that exists and
  remove from that. We trust the OpenSSL people to make good choices about what
  the default is.

* Fix hash_update_hash_key() to handle same-bucket case correctly. (Tom Lane, 2013-01-14)
  Original coding would corrupt the hashtable if the item being updated was at
  the end of its bucket chain and the new hash key hashed to that same bucket.

  Diagnosis and fix by Heikki Linnakangas.

* Reject out-of-range dates in to_date(). (Tom Lane, 2013-01-14)
  Dates outside the supported range could be entered, but would not print
  reasonably, and operations such as conversion to timestamp wouldn't behave
  sanely either. Since this has the potential to result in undumpable table
  data, it seems worth back-patching.

  Hitoshi Harada

* Prevent very-low-probability PANIC during PREPARE TRANSACTION. (Tom Lane, 2013-01-13)
  The code in PostPrepare_Locks supposed that it could reassign locks to the
  prepared transaction's dummy PGPROC by deleting the PROCLOCK table entries
  and immediately creating new ones. This was safe when that code was written,
  but since we invented partitioning of the shared lock table, it's not safe
  --- another process could steal away the PROCLOCK entry in the short interval
  when it's on the freelist. Then, if we were otherwise out of shared memory,
  PostPrepare_Locks would have to PANIC, since it's too late to back out of the
  PREPARE at that point.

  Fix by inventing a dynahash.c function to atomically update a hashtable
  entry's key. (This might possibly have other uses in future.)

  This is an ancient bug that in principle we ought to back-patch, but the odds
  of someone hitting it in the field seem really tiny, because (a) the risk
  window is small, and (b) nobody runs servers with maxed-out lock tables for
  long, because they'll be getting non-PANIC out-of-memory errors anyway. So
  fixing it in HEAD seems sufficient, at least until the new code has gotten
  some testing.

* Make spelling more uniform (Peter Eisentraut, 2013-01-13)

* Update comments for elog_start(). (Tom Lane, 2013-01-13)
  Forgot I was going to do this as part of the previous patch ...

* Redesign the planner's handling of index-descent cost estimation. (Tom Lane, 2013-01-11)
  Historically we've used a couple of very ad-hoc fudge factors to try to get
  the right results when indexes of different sizes would satisfy a query with
  the same number of index leaf tuples being visited. In commit
  21a39de5809cd3050a37d2554323cc1d0cbeed9d I tweaked one of these fudge
  factors, with results that proved disastrous for larger indexes. Commit
  bf01e34b556ff37982ba2d882db424aa484c0d07 fudged it some more, but still with
  not a lot of principle behind it.

  What seems like a better way to address these issues is to explicitly model
  index-descent costs, since that's what's really at stake when considering
  different indexes with similar leaf-page-level costs. We tried that once long
  ago, and found that charging random_page_cost per page descended through was
  way too much, because upper btree levels tend to stay in cache in real-world
  workloads. However, there's still CPU costs to think about, and the previous
  fudge factors can be seen as a crude attempt to account for those costs. So
  this patch replaces those fudge factors with explicit charges for the number
  of tuple comparisons needed to descend the index tree, plus a small charge
  per page touched in the descent. The cost multipliers are chosen so that the
  resulting charges are in the vicinity of the historical (pre-9.2) fudge
  factors for indexes of up to about a million tuples, while not ballooning
  unreasonably beyond that, as the old fudge factor did (even more so in 9.2).

  To make this work accurately for btree indexes, add some code that allows
  extraction of the known root-page height from a btree. There's no equivalent
  number readily available for other index types, but we can use the log of the
  number of index pages as an approximate substitute.

  This seems like too much of a behavioral change to risk back-patching, but it
  should improve matters going forward. In 9.2 I'll just revert the
  fudge-factor change.

* Make some spelling more consistent (Peter Eisentraut, 2013-01-05)

* Invent a "one-shot" variant of CachedPlans for better performance.Tom Lane2013-01-04
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | SPI_execute() and related functions create a CachedPlan, execute it once, and immediately discard it, so that the functionality offered by plancache.c is of no value in this code path. And performance measurements show that the extra data copying and invalidation checking done by plancache.c slows down simple queries by 10% or more compared to 9.1. However, enough of the SPI code is shared with functions that do need plan caching that it seems impractical to bypass plancache.c altogether. Instead, let's invent a variant version of cached plans that preserves 99% of the API but doesn't offer any of the actual functionality, nor the overhead. This puts SPI_execute() performance back on par, or maybe even slightly better, than it was before. This change should resolve recent complaints of performance degradation from Dong Ye, Pavel Stehule, and others. By avoiding data copying, this change also reduces the amount of memory needed to execute many-statement SPI_execute() strings, as for instance in a recent complaint from Tomas Vondra. An additional benefit of this change is that multi-statement SPI_execute() query strings are now processed fully serially, that is we complete execution of earlier statements before running parse analysis and planning on following ones. This eliminates a long-standing POLA violation, in that DDL that affects the behavior of a later statement will now behave as expected. Back-patch to 9.2, since this was a performance regression compared to 9.1. (In 9.2, place the added struct fields so as to avoid changing the offsets of existing fields.) Heikki Linnakangas and Tom Lane
* Fix IsUnderPostmaster/EXEC_BACKEND confusion (Alvaro Herrera, 2013-01-02)

* Set MaxBackends only on bootstrap and standalone modes (Alvaro Herrera, 2013-01-02)
  ... not on auxiliary processes. I managed to overlook the fact that I had
  disabled assertions on my HEAD checkout long ago.

  Hopefully this will turn the buildfarm green again, and put an end to today's
  silliness.