...
* NLS: Use msgmerge/xgettext --no-wrap and --sort-by-file (Peter Eisentraut, 2012-04-05)
  The option --no-wrap prevents wars with (most?) editors about proper line
  wrapping. --sort-by-file ensures consistent file order, for easier diffing.

* Allow pg_archivecleanup to strip optional file extensions. (Robert Haas, 2012-04-05)
  Greg Smith and Jaime Casanova, reviewed by Alex Shulgin and myself.

* Publish checkpoint timing information to pg_stat_bgwriter. (Robert Haas, 2012-04-05)
  Greg Smith, Peter Geoghegan, and Robert Haas

* Update obsolete comment. (Tom Lane, 2012-04-05)
  Somebody didn't bother to fix this comment while adding foreign table
  support to the code below it. In passing, remove the explicit calling-out
  of relkind letters, which adds complexity to the comment but doesn't help
  in understanding the code.

* Correctly explain units used by function-timing stats functions. (Robert Haas, 2012-04-05)
  The views are in milliseconds, but the raw functions return microseconds.

* Expose track_iotiming data via the statistics collector. (Robert Haas, 2012-04-05)
  Ants Aasma's original patch to add timing information for buffer I/O
  requests exposed this data at the relation level, which was judged too
  costly. I've here exposed it at the database level instead.

* Fix plpgsql named-cursor-parameter feature for variable name conflicts. (Tom Lane, 2012-04-04)
  The parser got confused if a cursor parameter had the same name as a
  plpgsql variable. Reported and diagnosed by Yeb Havinga, though this isn't
  exactly his proposed fix. Also, some mostly-but-not-entirely-cosmetic
  adjustments to the original named-cursor-parameter patch, for code
  readability and better error diagnostics.

* Improve efficiency of dblink by using libpq's new row processor API. (Tom Lane, 2012-04-04)
  This patch provides a test case for libpq's row processor API.
  contrib/dblink can deal with very large result sets by dumping them into a
  tuplestore (which can spill to disk) --- but until now, the intermediate
  storage of the query result in a PGresult meant memory bloat for any large
  result. Now we use a row processor to convert the data to tuple form and
  dump it directly into the tuplestore.
  A limitation is that this only works for plain dblink() queries, not
  dblink_send_query() followed by dblink_get_result(). In the latter case we
  don't know the desired tuple rowtype soon enough. While hack solutions to
  that are possible, a different user-level API would probably be a better
  answer.
  Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane

* Add a "row processor" API to libpq for better handling of large results. (Tom Lane, 2012-04-04)
  Traditionally libpq has collected an entire query result before passing it
  back to the application. That provides a simple and transactional API, but
  it's pretty inefficient for large result sets. This patch allows the
  application to process each row on-the-fly instead of accumulating the
  rows into the PGresult. Error recovery becomes a bit more complex, but
  often that tradeoff is well worth making.
  Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane

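  For illustration only, a minimal client-side sketch of how such a row
  processor might be used. The registration call and callback signature
  shown here are assumptions based on the description above, not an
  authoritative reference to the committed API; my_row_processor and
  some_large_table are made-up names.

      /* Hedged sketch: the callback and PQsetRowProcessor signatures are
       * assumed for illustration. */
      #include <stdio.h>
      #include <libpq-fe.h>

      /* Hypothetical row-processor callback: invoked once per incoming data
       * row, so the application can consume rows without accumulating the
       * whole result set in a PGresult. */
      static int
      my_row_processor(PGresult *res, const PGdataValue *columns,
                       const char **errmsgp, void *param)
      {
          long   *count = (long *) param;

          (*count)++;          /* "process" the row on the fly; here, just count it */
          return 1;            /* nonzero: row consumed, keep going */
      }

      int
      main(void)
      {
          PGconn     *conn = PQconnectdb("");
          long        nrows = 0;
          PGresult   *res;

          PQsetRowProcessor(conn, my_row_processor, &nrows);   /* assumed API */
          res = PQexec(conn, "SELECT * FROM some_large_table");
          printf("saw %ld rows without storing them in the PGresult\n", nrows);
          PQclear(res);
          PQfinish(conn);
          return 0;
      }
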
* Remove useless PGRES_COPY_BOTH "support" in psql. (Tom Lane, 2012-04-04)
  There is no existing or foreseeable case in which psql should see a
  PGRES_COPY_BOTH PQresultStatus; and if such a case ever emerges, it's a
  pretty good bet that these code fragments wouldn't do the right thing
  anyway. Remove them, and let the existing default cases do the appropriate
  thing, namely emit an "unexpected PQresultStatus" bleat.
  Noted while working on libpq row processor patch, for which I was
  considering adding a PGRES_SUSPENDED status code --- the same default-case
  treatment would be appropriate for that.

* Fix syslogger to not lose log coherency under high load. (Tom Lane, 2012-04-04)
  The original coding of the syslogger had an arbitrary limit of 20 large
  messages concurrently in progress, after which it would just punt and dump
  message fragments to the output file separately. Our ambitions are a bit
  higher than that now, so allow the data structure to expand as necessary.
  Reported and patched by Andrew Dunstan; some editing by Tom

* Fix a couple of contrib/dblink bugs. (Tom Lane, 2012-04-03)
  dblink_exec leaked temporary database connections if any error occurred
  after connection setup, for example
      SELECT dblink_exec('...connect string...', 'select 1/0');
  Add a PG_TRY block to ensure PQfinish gets done when it is needed.
  (dblink_record_internal is on the hairy edge of needing similar treatment,
  but seems not to be actively broken at the moment.)
  Also, in 9.0 and up, only one of the three functions using tuplestore
  return mode was properly checking that the query context would allow a
  tuplestore result.
  Noted while reviewing dblink patch. Back-patch to all supported branches.

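  For illustration, a simplified sketch of the PG_TRY pattern described
  above (not the actual dblink code; run_on_temp_connection and the elided
  result checking are made up for the example):

      /* Simplified sketch: close a temporary libpq connection even when the
       * work done on it raises a PostgreSQL error.  Connection-error
       * checking is elided for brevity. */
      #include "postgres.h"
      #include "libpq-fe.h"

      static void
      run_on_temp_connection(const char *connstr, const char *sql)
      {
          PGconn *conn = PQconnectdb(connstr);

          PG_TRY();
          {
              PGresult *res = PQexec(conn, sql);

              /* ... check the result; may ereport(ERROR, ...) ... */
              PQclear(res);
          }
          PG_CATCH();
          {
              /* error path: don't leak the temporary connection */
              PQfinish(conn);
              PG_RE_THROW();
          }
          PG_END_TRY();

          /* normal path */
          PQfinish(conn);
      }
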
* Arrange for on_exit_nicely to be thread-safe. (Robert Haas, 2012-04-03)
  Extracted from Joachim Wieland's parallel pg_dump patch, with some
  additional comments by me.

* Add support for renaming domain constraints (Peter Eisentraut, 2012-04-03)

* NLS: Seed Language field in PO header (Peter Eisentraut, 2012-04-02)
  Use msgmerge --lang option to seed the Language field, recently introduced
  by gettext, in the header of the new PO file.

* Fix recently introduced typo in NLS file lists (Peter Eisentraut, 2012-04-02)

* Fix O(N^2) behavior in pg_dump when many objects are in dependency loops. (Tom Lane, 2012-03-31)
  Combining the loop workspace with the record of already-processed objects
  might have been a cute trick, but it behaves horridly if there are many
  dependency loops to repair: the time spent in the first step of findLoop()
  grows as O(N^2). Instead use a separate flag array indexed by dump ID,
  which we can check in constant time. The length of the workspace array is
  now never more than the actual length of a dependency chain, which should
  be reasonably short in all cases of practical interest. The code is
  noticeably easier to understand this way, too.
  Per gripe from Mike Roest. Since this is a longstanding performance bug,
  backpatch to all supported versions.

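  For illustration, a generic sketch of the flag-array idea described above
  (the struct and helper names here are made up, not the actual pg_dump
  code):

      /* Instead of scanning the workspace to see whether an object was
       * already processed (O(N) per check), keep a boolean array indexed by
       * dump ID so the membership test is O(1). */
      #include <stdbool.h>
      #include <stdlib.h>

      typedef struct
      {
          int     dumpId;         /* small integer identifying the object */
          /* ... other fields ... */
      } ExampleObject;

      static bool *processed;     /* indexed by dumpId */

      static void
      init_processed_flags(int maxDumpId)
      {
          processed = calloc(maxDumpId + 1, sizeof(bool));
      }

      static bool
      already_processed(const ExampleObject *obj)
      {
          return processed[obj->dumpId];      /* constant time, no scan */
      }

      static void
      mark_processed(const ExampleObject *obj)
      {
          processed[obj->dumpId] = true;
      }
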
* Fix O(N^2) behavior in pg_dump for large numbers of owned sequences. (Tom Lane, 2012-03-31)
  The loop that matched owned sequences to their owning tables required time
  proportional to number of owned sequences times number of tables; although
  this work was only expended in selective-dump situations, which is
  probably why the issue wasn't recognized long since. Refactor slightly so
  that we can perform this work after the index array for findTableByOid has
  been set up, reducing the time to O(M log N).
  Per gripe from Mike Roest. Since this is a longstanding performance bug,
  backpatch to all supported versions.

* Rename frontend keyword arrays to avoid conflict with backend. (Tom Lane, 2012-03-31)
  ecpg and pg_dump each contain keyword arrays with structure similar to the
  backend's keyword array. Up to now, we actually named those arrays the
  same as the backend's and relied on parser/keywords.h to declare them.
  This seems a tad too cute, though, and it breaks now that we need to
  PGDLLIMPORT-decorate the backend symbols. Rename to avoid the problem.
  Per buildfarm.
  (It strikes me that maybe we should get rid of the separate keywords.c
  files altogether, and just define these arrays in the modules that use
  them, but that's a rather more invasive change.)

* Fix glitch recently introduced in psql tab completion. (Tom Lane, 2012-03-31)
  Over-optimization (by me, looks like :-() broke the case of recognizing a
  word boundary just before a quoted identifier. Reported and diagnosed by
  Dean Rasheed.

* Add PGDLLIMPORT to ScanKeywords and NumScanKeywords. (Tom Lane, 2012-03-31)
  Per buildfarm, this is now needed by contrib/pg_stat_statements.

* Add new files to NLS file lists (Peter Eisentraut, 2012-03-30)
  Some of these are newly added, some are older and were forgotten, some
  don't contain any translatable strings right now but look like they could
  in the future.

* Replace printf format %i by %d (Peter Eisentraut, 2012-03-30)
  see also ce8d7bb6440710058503d213b2aafcdf56a5b481

* pgxs: Supply default values for BISON and FLEX variables (Peter Eisentraut, 2012-03-30)
  Otherwise, the availability of these variables depends on what happened to
  be available at the time the PostgreSQL build was configured.

* pg_test_timing: Lame hack to work around compiler warning. (Robert Haas, 2012-03-30)
  Fujii Masao, plus a comment by me. While I'm at it, correctly tabify this
  chunk of code.

* Fix dblink's failure to report correct connection name in error messages. (Tom Lane, 2012-03-29)
  The DBLINK_GET_CONN and DBLINK_GET_NAMED_CONN macros did not set the
  surrounding function's conname variable, causing errors to be incorrectly
  reported as having occurred on the "unnamed" connection in some cases.
  This bug was actually visible in two cases in the regression tests, but
  apparently whoever added those cases wasn't paying attention.
  Noted by Kyotaro Horiguchi, though this is different from his proposed
  patch. Back-patch to 8.4; 8.3 does not have the same type of error
  reporting so the patch is not relevant.

* Improve contrib/pg_stat_statements' handling of PREPARE/EXECUTE statements. (Tom Lane, 2012-03-29)
  It's actually more useful for the module to ignore these. Ignoring EXECUTE
  (and not incrementing the nesting level) allows the executor hooks to
  charge the time to the underlying prepared query, which shows up as a
  stats entry with the original PREPARE as query string (possibly modified
  by suppression of constants, which might not be terribly useful here but
  it's not worth avoiding). This is much more useful than cluttering the
  stats table with a distinct entry for each textually distinct EXECUTE.
  Experimentation with this idea shows that it's also preferable to ignore
  PREPARE. If we don't, we get two stats table entries, one with the query
  string hash and one with the jumble-derived hash, but with the same
  visible query string (modulo those constants). This is confusing and not
  very helpful, since the first entry will only receive costs associated
  with initial planning of the query, which is not something counted at all
  normally by pg_stat_statements. (And if we do start tracking planning
  costs, we'd want them blamed on the other hash table entry anyway.)

* Improve handling of utility statements containing plannable statements. (Tom Lane, 2012-03-29)
  When tracking nested statements, contrib/pg_stat_statements formerly
  double-counted the execution costs of utility statements that directly
  contain an executable statement, such as EXPLAIN and DECLARE CURSOR. This
  was not obvious since the ProcessUtility and Executor hooks would each add
  their measured costs to the same stats table entry. However, with the new
  implementation that hashes utility and plannable statements differently,
  this showed up as seemingly-duplicate stats entries. Fix that by disabling
  the Executor hooks when the query has a queryId of zero, which was the
  case already for such statements but is now more clearly specified in the
  code. (The zero queryId was causing problems anyway because all such
  statements would add to a single bogus entry.)
  The PREPARE/EXECUTE case still results in counting the same execution in
  two different stats table entries, but it should be much less surprising
  to users that there are two entries in such cases.
  In passing, include a CommonTableExpr's ctename in the query hash. I had
  left it out originally on the grounds that we wanted to omit all
  inessential aliases, but since RTE_CTE RTEs are hashing their referenced
  names, we'd better hash the CTE names too to make sure we don't hash
  semantically different queries the same.

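  For illustration, a rough sketch of the queryId-of-zero guard described
  above (a hypothetical executor hook, not the module's actual code; the
  hook would be installed from _PG_init(), not shown):

      /* Skip statistics tracking when parse analysis did not assign a
       * queryId, which is the case for plain utility statements. */
      #include "postgres.h"
      #include "executor/executor.h"

      static ExecutorStart_hook_type prev_ExecutorStart = NULL;

      static void
      my_ExecutorStart(QueryDesc *queryDesc, int eflags)
      {
          if (prev_ExecutorStart)
              prev_ExecutorStart(queryDesc, eflags);
          else
              standard_ExecutorStart(queryDesc, eflags);

          if (queryDesc->plannedstmt->queryId != 0)
          {
              /* ... enable instrumentation / charge costs for this query ... */
          }
      }
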
* initdb: Mark more messages for translation (Peter Eisentraut, 2012-03-29)
  Some Windows-only messages had apparently been forgotten so far. Also make
  the wording of the messages more consistent with similar messages in other
  parts, such as pg_ctl and pg_regress.

* Correct epoch of txid_current() when executed on a Hot Standby server. (Simon Riggs, 2012-03-29)
  Initialise ckptXidEpoch from starting checkpoint and maintain the correct
  value as we roll forwards. This allows GetNextXidAndEpoch() to return the
  correct epoch when executed during recovery. Backpatch to 9.0 when the
  problem is first observable by a user.
  Bug report from Daniel Farina

* Unbreak Windows builds broken by pgpipe removal. (Andrew Dunstan, 2012-03-29)

* Inherit max_safe_fds to child processes in EXEC_BACKEND mode. (Heikki Linnakangas, 2012-03-29)
  Postmaster sets max_safe_fds by testing how many open file descriptors it
  can open, and that is normally inherited by all child processes at fork().
  Not so on EXEC_BACKEND, ie. Windows, however. Because of that, we
  effectively ignored max_files_per_process on Windows, and always assumed a
  conservative default of 32 simultaneous open files. That could have an
  impact on performance, if you need to access a lot of different files in a
  query. After this patch, the value is passed to child processes by
  save/restore_backend_variables() among many other global variables.
  It has been like this forever, but given the lack of complaints about it,
  I'm not backpatching this.

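  For illustration, a hedged sketch of the save/restore pattern referred to
  above, trimmed to a single field (the struct and function names here are
  simplified stand-ins, not the actual postmaster.c code):

      /* Globals that fork() would normally propagate are written into a
       * parameter block by the postmaster and read back in the EXEC_BACKEND
       * child process. */
      int max_safe_fds = 32;          /* stands in for the real global in fd.c */

      typedef struct
      {
          int     max_safe_fds;
          /* ... many other backend globals ... */
      } ExampleBackendParameters;     /* stand-in for BackendParameters */

      static void
      save_vars(ExampleBackendParameters *param)
      {
          param->max_safe_fds = max_safe_fds;         /* postmaster side */
      }

      static void
      restore_vars(const ExampleBackendParameters *param)
      {
          max_safe_fds = param->max_safe_fds;         /* child side */
      }
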
* Remove now redundant pgpipe code. (Andrew Dunstan, 2012-03-28)

* Improve contrib/pg_stat_statements to lump "similar" queries together. (Tom Lane, 2012-03-28)
  pg_stat_statements now hashes selected fields of the analyzed parse tree
  to assign a "fingerprint" to each query, and groups all queries with the
  same fingerprint into a single entry in the pg_stat_statements view. In
  practice it is expected that queries with the same fingerprint will be
  equivalent except for values of literal constants. To make the display
  more useful, such constants are replaced by "?" in the displayed query
  strings.
  This mechanism currently supports only optimizable queries (SELECT,
  INSERT, UPDATE, DELETE). Utility commands are still matched on the basis
  of their literal query strings.
  There remain some open questions about how to deal with utility statements
  that contain optimizable queries (such as EXPLAIN and SELECT INTO) and how
  to deal with expiring speculative hashtable entries that are made to save
  the normalized form of a query string. However, fixing these issues should
  require only localized changes, and since there are other open patches
  involving contrib/pg_stat_statements, it seems best to go ahead and commit
  what we've got.
  Peter Geoghegan, reviewed by Daniel Farina

* Run maintainer-check on all PO files, not only configured ones (Peter Eisentraut, 2012-03-28)
  The intent is to allow configure --enable-nls=xx for installation speed
  and size, but have maintainer-check check all source files regardless.

* Tweak markup to avoid extra whitespace in man pages (Peter Eisentraut, 2012-03-28)

* Attempt to unbreak pg_test_timing on Windows. (Robert Haas, 2012-03-28)
  Per buildfarm, and Álvaro Herrera.

* pg_basebackup: Error handling fixes. (Robert Haas, 2012-03-28)
  Thomas Ogrisegg and Fujii Masao

* pg_basebackup: Error message improvements. (Robert Haas, 2012-03-28)
  Fujii Masao

* Doc fix for pg_test_timing. (Robert Haas, 2012-03-28)
  Fujii Masao

* pg_test_timing utility, to measure clock monotonicity and timing cost. (Robert Haas, 2012-03-27)
  Ants Aasma, Greg Smith

* Expose track_iotiming information via pg_stat_statements. (Robert Haas, 2012-03-27)
  Ants Aasma, reviewed by Greg Smith, with very minor tweaks by me.

* Bend parse location rules for the convenience of pg_stat_statements. (Tom Lane, 2012-03-27)
  Generally, the parse location assigned to a multiple-token construct is
  the location of its leftmost token. This commit breaks that rule for the
  syntaxes TYPENAME 'LITERAL' and CAST(CONSTANT AS TYPENAME) --- the
  resulting Const will have the location of the literal string, not the
  typename or CAST keyword. The cases where this matters are pretty thin on
  the ground (no error messages in the regression tests change, for
  example), and it's unlikely that any user would be confused anyway by an
  error cursor pointing at the literal. But still it's less than consistent.
  The reason for changing it is that contrib/pg_stat_statements wants to
  know the parse location of the original literal, and it was agreed that
  this is the least unpleasant way to preserve that information through
  parse analysis.
  Peter Geoghegan

* Add some infrastructure for contrib/pg_stat_statements. (Tom Lane, 2012-03-27)
  Add a queryId field to Query and PlannedStmt. This is not used by the core
  backend, except for being copied around at appropriate times. It's meant
  to allow plug-ins to track a particular query forward from parse analysis
  to execution. The queryId is intentionally not dumped into stored rules
  (and hence this commit doesn't bump catversion). You could argue that
  choice either way, but it seems better that stored rule strings not have
  any dependency on plug-ins that might or might not be present.
  Also, add a post_parse_analyze_hook that gets invoked at the end of parse
  analysis (but only for top-level analysis of complete queries, not cases
  such as analyzing a domain's default-value expression). This is mainly
  meant to be used to compute and assign a queryId, but it could have other
  applications.
  Peter Geoghegan

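  For illustration, a minimal sketch of how a plug-in might use the new hook
  to stamp a queryId (compute_query_fingerprint and my_post_parse_analyze
  are made-up names; the real pg_stat_statements hashes selected parse-tree
  fields rather than returning a constant):

      #include "postgres.h"
      #include "fmgr.h"
      #include "parser/analyze.h"

      PG_MODULE_MAGIC;

      static post_parse_analyze_hook_type prev_post_parse_analyze_hook = NULL;

      /* Placeholder fingerprint; a real implementation would jumble the tree. */
      static uint32
      compute_query_fingerprint(Query *query)
      {
          return 1;
      }

      static void
      my_post_parse_analyze(ParseState *pstate, Query *query)
      {
          if (prev_post_parse_analyze_hook)
              prev_post_parse_analyze_hook(pstate, query);

          /* Stamp the query so planner/executor hooks can recognize it later. */
          query->queryId = compute_query_fingerprint(query);
      }

      void
      _PG_init(void)
      {
          prev_post_parse_analyze_hook = post_parse_analyze_hook;
          post_parse_analyze_hook = my_post_parse_analyze;
      }
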
* New GUC, track_iotiming, to track I/O timings. (Robert Haas, 2012-03-27)
  Currently, the only way to see the numbers this gathers is via EXPLAIN
  (ANALYZE, BUFFERS), but the plan is to add visibility through the stats
  collector and pg_stat_statements in subsequent patches.
  Ants Aasma, reviewed by Greg Smith, with some further changes by me.

* Silence compiler warning about uninitialized variable. (Tom Lane, 2012-03-27)

* pg_dump: Small message adjustment for consistency (Peter Eisentraut, 2012-03-27)

* Improve PL/Python database access function documentation (Peter Eisentraut, 2012-03-26)
  Organize the function descriptions as a list instead of running text, for
  easier access.

* Remove dead assignment (Peter Eisentraut, 2012-03-26)
  found by Coverity

* Code cleanup for heap_freeze_tuple. (Robert Haas, 2012-03-26)
  It used to be the case that lazy vacuum could call this function with only
  a shared lock on the buffer, but neither lazy vacuum nor any other code
  path does that any more. Simplify the code accordingly and clean up some
  related, obsolete comments.