* Fix typo in comment. (Robert Haas, 2012-04-13)
* Update lazy_scan_heap header comment. (Robert Haas, 2012-04-13)
  The previous comment described how things worked in PostgreSQL 8.2 and prior.
* Assorted spelling corrections. (Tom Lane, 2012-04-12)
  Thom Brown
* Fix cost estimation for indexscan filter conditions. (Tom Lane, 2012-04-11)
  cost_index's method for estimating per-tuple costs of evaluating filter conditions (a/k/a qpquals) was completely wrong in the presence of derived indexable conditions, such as range conditions derived from a LIKE clause. This was largely masked in common cases as a result of all simple operator clauses having about the same costs, but it could show up in a big way when dealing with functional indexes containing expensive functions, as seen for example in bug #6579 from Istvan Endredy.

  Rejigger the calculation to give sane answers when the indexquals aren't a subset of the baserestrictinfo list. As a side benefit, we now do the calculation properly for cases involving join clauses (ie, parameterized indexscans), which we always overestimated before.

  There are still cases where this is an oversimplification, such as clauses that can be dropped because they are implied by a partial index's predicate. But we've never accounted for that in cost estimates before, and I'm not convinced it's worth the cycles to try to do so.
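  A minimal sketch of the kind of case described above (the table, function, and index names are hypothetical, not from the commit): an expression index on an expensive function, queried with a LIKE clause from which indexable range conditions are derived.

      -- Hypothetical expensive function; COST tells the planner it is costly to run.
      CREATE FUNCTION slow_normalize(text) RETURNS text
          LANGUAGE sql IMMUTABLE COST 10000 AS 'SELECT lower($1)';
      CREATE TABLE docs (title text);
      CREATE INDEX docs_norm_idx ON docs (slow_normalize(title) text_pattern_ops);
      -- LIKE on the indexed expression can be turned into index range conditions;
      -- the remaining filter condition's per-tuple cost is what this commit estimates sanely.
      EXPLAIN SELECT * FROM docs WHERE slow_normalize(title) LIKE 'abc%';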
* Silently ignore any nonexistent schemas that are listed in search_path. (Tom Lane, 2012-04-11)
  Previously we attempted to throw an error or at least warning for missing schemas, but this was done inconsistently because of implementation restrictions (in many cases, GUC settings are applied outside transactions so that we can't do system catalog lookups). Furthermore, there were exceptions to the rule even in the beginning, and we'd been poking more and more holes in it as time went on, because it turns out that there are lots of use-cases for having some irrelevant items in a common search_path value. It seems better to just adopt a philosophy similar to what's always been done with Unix PATH settings, wherein nonexistent or unreadable directories are silently ignored.

  This commit also fixes the documentation to point out that schemas for which the user lacks USAGE privilege are silently ignored. That's always been true but was previously not documented.

  This is mostly in response to Robert Haas' complaint that 9.1 started to throw errors or warnings for missing schemas in cases where prior releases had not. We won't adopt such a significant behavioral change in a back branch, so something different will be needed in 9.1.
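  A minimal sketch of the new behavior (schema names assumed): a schema that does not exist simply drops out of the effective path instead of raising an error or warning.

      SET search_path = no_such_schema, public;    -- accepted silently now
      SHOW search_path;                            -- still shows both entries
      SELECT current_schemas(true);                -- lists only schemas that actually exist and are usable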
* Accept postgres:// URIs in libpq connection functions (Alvaro Herrera, 2012-04-11)
  postgres:// URIs are an attempt to "stop the bleeding" in this general area that has been said to occur due to external projects adopting their own syntaxes. The syntaxes supported by this patch:

      postgres://[user[:pwd]@][unix-socket][:port[/dbname]][?param1=value1&...]
      postgres://[user[:pwd]@][net-location][:port][/dbname][?param1=value1&...]

  should be enough to cover most interesting cases without having to resort to "param=value" pairs, but those are provided for the cases that need them regardless.

  libpq documentation has been shuffled around a bit, to avoid stuffing all the format details into the PQconnectdbParams description, which was already a bit overwhelming. The list of keywords has moved to its own subsection, and the details on the URI format live in another subsection.

  This includes a simple test program, as requested in discussion, to ensure that interesting corner cases continue to work appropriately in the future.

  Author: Alexander Shulgin
  Some tweaking by Álvaro Herrera, Greg Smith, Daniel Farina, Peter Eisentraut
  Reviewed by Robert Haas, Alexey Klyukin (offlist), Heikki Linnakangas, Marko Kreen, and others

  Oh, it also supports postgresql:// but that's probably just an accident.
* Make pg_tablespace_location(0) return the database's default tablespace. (Tom Lane, 2012-04-10)
  This definition is convenient when applying the function to the reltablespace column of pg_class, since that's what zero means there; and it doesn't interfere with any other plausible use of the function. Per gripe from Bruce Momjian.
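  A minimal sketch of the use case mentioned above (table name assumed): reltablespace is 0 for relations stored in the database's default tablespace, and the function now resolves that to the default tablespace's path.

      SELECT c.relname,
             pg_tablespace_location(c.reltablespace) AS tablespace_path
      FROM pg_class c
      WHERE c.relname = 'my_table';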
* Fix pg_upgrade to properly upgrade a table that is stored in the cluster default tablespace, but part of a database that is in a user-defined tablespace. (Bruce Momjian, 2012-04-10)
  Caused "file not found" error during upgrade.

  Per bug report from Ants Aasma.

  Backpatch to 9.1 and 9.0.
* NLS: Initialize Project-Id-Version field by xgettext (Peter Eisentraut, 2012-04-10)
  Since xgettext provides options to do this now, we might as well use them.
* psql: Improve tab completion of WITH (Peter Eisentraut, 2012-04-10)
  Only match when WITH is the first word, as WITH may appear in many other contexts.

  Josh Kupershmidt
* Measure epoch of timestamp-without-time-zone from local not UTC midnight. (Tom Lane, 2012-04-10)
  This patch reverts commit 191ef2b407f065544ceed5700e42400857d9270f and thereby restores the pre-7.3 behavior of EXTRACT(EPOCH FROM timestamp-without-tz). Per discussion, the more recent behavior was misguided on a couple of grounds: it makes it hard to get a non-timezone-aware epoch value for a timestamp, and it makes this one case dependent on the value of the timezone GUC, which is incompatible with having timestamp_part() labeled as immutable.

  The other behavior is still available (in all releases) by explicitly casting the timestamp to timestamp with time zone before applying EXTRACT. This will need to be called out as an incompatible change in the 9.2 release notes.

  Although having mutable behavior in a function marked immutable is clearly a bug, we're not going to back-patch such a change.
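  A minimal sketch contrasting the two behaviors described above (example values assumed):

      -- timestamp without time zone: the epoch is now computed without consulting
      -- the TimeZone setting, so the result is the same in every session.
      SELECT extract(epoch FROM timestamp '2000-01-01 00:00:00');     -- 946684800
      -- The timezone-aware result (the previous behavior) is still available by
      -- using a timestamp with time zone value instead:
      SELECT extract(epoch FROM timestamptz '2000-01-01 00:00:00');   -- depends on TimeZone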
* Point the URL to PL/py directly to the page about the procedural language. (Heikki Linnakangas, 2012-04-10)
  It used to point to a top-level page that contains client-side tools as well. It was hard to find the procedural language there.
* Fix typos in docs, some words were doubled. (Heikki Linnakangas, 2012-04-10)
  Thom Brown
* Adjust various references to GEQO being non-deterministic. (Tom Lane, 2012-04-09)
  It's still non-deterministic in some sense ... but given fixed settings and identical planning problems, it will now always choose the same plan, so we probably shouldn't tar it with that brush. Per bug #6565 from Guillaume Cottenceau. Back-patch to 9.0 where the behavior was fixed.
* Re-add documentation recommendation to use gzip/gunzip for archive file storage. (Bruce Momjian, 2012-04-09)
* Update documentation to more clearly label the streaming replication option. (Bruce Momjian, 2012-04-09)
* Remove documentation mention of pglesslog. (Bruce Momjian, 2012-04-09)
  The mention was added in 2009; it is removed because there was only a beta release for 9.0 and it does not compile on 9.1.
* Fix an Assert that turns out to be reachable after all. (Tom Lane, 2012-04-09)
  estimate_num_groups() gets unhappy with

      create table empty();
      select * from empty except select * from empty e2;

  I can't see any actual use-case for such a query (and the table is illegal per SQL spec), but it seems like a good idea that it not cause an assert failure.
* Don't bother copying empty support arrays in a zero-column MergeJoin. (Tom Lane, 2012-04-09)
  The case could not arise when this code was originally written, but it can now (since we made zero-column MergeJoins work for the benefit of FULL JOIN ON TRUE). I don't think there is any actual bug here, but we might as well treat it consistently with other uses of COPY_POINTER_FIELD(). Per comment from Ashutosh Bapat.
* Save a few cycles while creating "sticky" entries in pg_stat_statements. (Tom Lane, 2012-04-09)
  There's no need to sit there and increment the stats when we know all the increments would be zero anyway. The actual additions might not be very expensive, but skipping acquisition of the spinlock seems like a good thing. Pushing the logic about initialization of the usage count down into entry_alloc() allows us to do that while making the code actually simpler, not more complex.

  Expansion on a suggestion by Peter Geoghegan.
* Remove link to ODBCng project from the docs. (Heikki Linnakangas, 2012-04-09)
  Thom Brown pointed out that the URL was out of date, and Devrim GÜNDÜZ pointed out that the project isn't maintained anymore.
* Teach SLRU code to avoid replacing I/O-busy pages. (Robert Haas, 2012-04-08)
  Patch by me; review by Tom Lane and others.
* Improve management of "sticky" entries in contrib/pg_stat_statements. (Tom Lane, 2012-04-08)
  This patch addresses a deficiency in the previous pg_stat_statements patch. We want to give sticky entries an initial "usage" factor high enough that they probably will stick around until their query is completed. However, if the query never completes (eg it gets an error during execution), the entry shouldn't persist indefinitely. Manage this by starting out with a usage setting equal to the (approximate) median usage value within the whole hashtable, but decaying the value much more aggressively than we do for normal entries.

  Peter Geoghegan
* set_stack_base() no longer needs to be called in PostgresMain. (Heikki Linnakangas, 2012-04-08)
  This was a thinko in the previous commit. Now that the stack base pointer is set in PostmasterMain and SubPostmasterMain, it doesn't need to be set in PostgresMain anymore.
* Do stack-depth checking in all postmaster children. (Heikki Linnakangas, 2012-04-08)
  We used to only initialize the stack base pointer when starting up a regular backend, not in other processes. In particular, autovacuum workers can run arbitrary user code, and without stack-depth checking, infinite recursion in, e.g., an index expression will bring down the whole cluster.

  The comment about PL/Java using set_stack_base() is not yet true. As the code stands, PL/Java still modifies the stack_base_ptr variable directly. However, it's been discussed on the PL/Java mailing list that it should be changed to use the function, because PL/Java is currently oblivious to the register stack used on Itanium. There's another issue with PL/Java, namely that the stack base pointer it sets is not really the base of the stack, it could be something close to the bottom of the stack. That's a separate issue that might need some further changes to this code, but that's a different story.

  Backpatch to all supported releases.
* Fix incorrect make maintainer-clean rule. (Tom Lane, 2012-04-07)
* Further adjustment of comment about qsort_tuple. (Tom Lane, 2012-04-07)
* Remove useless variable to suppress compiler warning. (Tom Lane, 2012-04-07)
* Stamp library versions for 9.2 (better late than never). (Bruce Momjian, 2012-04-07)
* Update URL for pgtclng project. (Tom Lane, 2012-04-06)
  Thom Brown
* Fix misleading output from gin_desc(). (Tom Lane, 2012-04-06)
  XLOG_GIN_UPDATE_META_PAGE and XLOG_GIN_DELETE_LISTPAGE records were printed with a list link field labeled as "blkno", which was confusing, especially when the link was empty (InvalidBlockNumber). Print the metapage block number instead, since that's what's actually being updated. We could include the link values too as a separate field, but not clear it's worth the trouble.

  Back-patch to 8.4 where the dubious code was added.
* Fix broken comparetup_datum code. (Tom Lane, 2012-04-06)
  Commit 337b6f5ecf05b21b5e997986884d097d60e4e3d0 contained the entirely fanciful assumption that it had made comparetup_datum unreachable. Reported and patched by Takashi Yamamoto.

  Fix up some not terribly accurate/useful comments from that commit, too.
* Fix some typos in the documentation (Peter Eisentraut, 2012-04-06)
  Thom Brown
* Correct various system catalog/view definitions in the documentation (Peter Eisentraut, 2012-04-06)
  Thom Brown
* Dept of second thoughts: improve the API for AnalyzeForeignTable. (Tom Lane, 2012-04-06)
  If we make the initially-called function return the table physical-size estimate, acquire_inherited_sample_rows will be able to use that to allocate numbers of samples among child tables, when the day comes that we want to support foreign tables in inheritance trees.
* Allow statistics to be collected for foreign tables. (Tom Lane, 2012-04-06)
  ANALYZE now accepts foreign tables and allows the table's FDW to control how the sample rows are collected. (But only manual ANALYZEs will touch foreign tables, for the moment, since among other things it's not very clear how to handle remote permissions checks in an auto-analyze.)

  contrib/file_fdw is extended to support this.

  Etsuro Fujita, reviewed by Shigeru Hanada, some further tweaking by me.
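  A minimal sketch using contrib/file_fdw (the server name, table definition, and file path are assumed for illustration):

      CREATE EXTENSION file_fdw;
      CREATE SERVER csv_server FOREIGN DATA WRAPPER file_fdw;
      CREATE FOREIGN TABLE words (word text)
          SERVER csv_server OPTIONS (filename '/tmp/words.csv', format 'csv');
      ANALYZE words;    -- the FDW supplies the sample rows
      SELECT * FROM pg_stats WHERE tablename = 'words';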
* Add DROP INDEX CONCURRENTLY [IF EXISTS], uses ShareUpdateExclusiveLock (Simon Riggs, 2012-04-06)
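  A minimal sketch of the new syntax (index name assumed); because only ShareUpdateExclusiveLock is taken, the drop does not block concurrent reads and writes of the table:

      DROP INDEX CONCURRENTLY IF EXISTS idx_orders_customer;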
* checkopint -> checkpoint (Robert Haas, 2012-04-05)
  Report by Guillaume Lelarge.
* Put back code inadvertently deleted from exit_nicely. (Robert Haas, 2012-04-05)
  Report by Andrew Dunstan.
* NLS: Use msgmerge/xgettext --no-wrap and --sort-by-file (Peter Eisentraut, 2012-04-05)
  The option --no-wrap prevents wars with (most?) editors about proper line wrapping. --sort-by-file ensures consistent file order, for easier diffing.
* Allow pg_archivecleanup to strip optional file extensions. (Robert Haas, 2012-04-05)
  Greg Smith and Jaime Casanova, reviewed by Alex Shulgin and myself.
* Publish checkpoint timing information to pg_stat_bgwriter. (Robert Haas, 2012-04-05)
  Greg Smith, Peter Geoghegan, and Robert Haas
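  A minimal sketch of reading the new timing data (assuming the column names checkpoint_write_time and checkpoint_sync_time added by this change, reported in milliseconds):

      SELECT checkpoints_timed, checkpoints_req,
             checkpoint_write_time, checkpoint_sync_time
      FROM pg_stat_bgwriter;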
* Update obsolete comment. (Tom Lane, 2012-04-05)
  Somebody didn't bother to fix this comment while adding foreign table support to the code below it. In passing, remove the explicit calling-out of relkind letters, which adds complexity to the comment but doesn't help in understanding the code.
* Correctly explain units used by function-timing stats functions. (Robert Haas, 2012-04-05)
  The views are in milliseconds, but the raw functions return microseconds.
* Expose track_iotiming data via the statistics collector. (Robert Haas, 2012-04-05)
  Ants Aasma's original patch to add timing information for buffer I/O requests exposed this data at the relation level, which was judged too costly. I've here exposed it at the database level instead.
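  A minimal sketch of querying the database-level counters (assuming the column names blk_read_time and blk_write_time in pg_stat_database, populated only while the track_iotiming GUC is enabled):

      SELECT datname, blk_read_time, blk_write_time
      FROM pg_stat_database
      ORDER BY blk_read_time DESC;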
* Fix plpgsql named-cursor-parameter feature for variable name conflicts. (Tom Lane, 2012-04-04)
  The parser got confused if a cursor parameter had the same name as a plpgsql variable. Reported and diagnosed by Yeb Havinga, though this isn't exactly his proposed fix.

  Also, some mostly-but-not-entirely-cosmetic adjustments to the original named-cursor-parameter patch, for code readability and better error diagnostics.
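  A minimal sketch of the conflicting-name situation described above (all names are illustrative): the cursor parameter n shares its name with a block variable.

      DO $$
      DECLARE
          n int := 3;                                          -- block variable named n
          c CURSOR (n int) FOR SELECT generate_series(1, n);   -- cursor parameter also named n
          v int;
      BEGIN
          OPEN c(n := 10);    -- named-parameter notation; this is what previously confused the parser
          FETCH c INTO v;
          CLOSE c;
          RAISE NOTICE 'first value: %', v;
      END
      $$;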
* Improve efficiency of dblink by using libpq's new row processor API. (Tom Lane, 2012-04-04)
  This patch provides a test case for libpq's row processor API. contrib/dblink can deal with very large result sets by dumping them into a tuplestore (which can spill to disk) --- but until now, the intermediate storage of the query result in a PGresult meant memory bloat for any large result. Now we use a row processor to convert the data to tuple form and dump it directly into the tuplestore.

  A limitation is that this only works for plain dblink() queries, not dblink_send_query() followed by dblink_get_result(). In the latter case we don't know the desired tuple rowtype soon enough. While hack solutions to that are possible, a different user-level API would probably be a better answer.

  Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane
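  For context, a minimal dblink() call of the kind that benefits (connection string and query assumed); the result now streams into the tuplestore rather than being accumulated in a PGresult first:

      CREATE EXTENSION dblink;
      SELECT *
      FROM dblink('dbname=postgres',
                  'SELECT generate_series(1, 1000000)') AS t(n int);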
* Add a "row processor" API to libpq for better handling of large results. (Tom Lane, 2012-04-04)
  Traditionally libpq has collected an entire query result before passing it back to the application. That provides a simple and transactional API, but it's pretty inefficient for large result sets. This patch allows the application to process each row on-the-fly instead of accumulating the rows into the PGresult. Error recovery becomes a bit more complex, but often that tradeoff is well worth making.

  Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane
* Remove useless PGRES_COPY_BOTH "support" in psql. (Tom Lane, 2012-04-04)
  There is no existing or foreseeable case in which psql should see a PGRES_COPY_BOTH PQresultStatus; and if such a case ever emerges, it's a pretty good bet that these code fragments wouldn't do the right thing anyway. Remove them, and let the existing default cases do the appropriate thing, namely emit an "unexpected PQresultStatus" bleat.

  Noted while working on libpq row processor patch, for which I was considering adding a PGRES_SUSPENDED status code --- the same default-case treatment would be appropriate for that.
* Fix syslogger to not lose log coherency under high load. (Tom Lane, 2012-04-04)
  The original coding of the syslogger had an arbitrary limit of 20 large messages concurrently in progress, after which it would just punt and dump message fragments to the output file separately. Our ambitions are a bit higher than that now, so allow the data structure to expand as necessary.

  Reported and patched by Andrew Dunstan; some editing by Tom