path: root/src
Commit message | Author | Date
...
* Update obsolete comment. (Tom Lane, 2012-04-05)
  Somebody didn't bother to fix this comment while adding foreign table support to the code below it. In passing, remove the explicit calling-out of relkind letters, which adds complexity to the comment but doesn't help in understanding the code.
* Expose track_iotiming data via the statistics collector. (Robert Haas, 2012-04-05)
  Ants Aasma's original patch to add timing information for buffer I/O requests exposed this data at the relation level, which was judged too costly. I've here exposed it at the database level instead.
* Fix plpgsql named-cursor-parameter feature for variable name conflicts. (Tom Lane, 2012-04-04)
  The parser got confused if a cursor parameter had the same name as a plpgsql variable. Reported and diagnosed by Yeb Havinga, though this isn't exactly his proposed fix. Also, some mostly-but-not-entirely-cosmetic adjustments to the original named-cursor-parameter patch, for code readability and better error diagnostics.
* Add a "row processor" API to libpq for better handling of large results.Tom Lane2012-04-04
| | | | | | | | | | | Traditionally libpq has collected an entire query result before passing it back to the application. That provides a simple and transactional API, but it's pretty inefficient for large result sets. This patch allows the application to process each row on-the-fly instead of accumulating the rows into the PGresult. Error recovery becomes a bit more complex, but often that tradeoff is well worth making. Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane
* Remove useless PGRES_COPY_BOTH "support" in psql. (Tom Lane, 2012-04-04)
  There is no existing or foreseeable case in which psql should see a PGRES_COPY_BOTH PQresultStatus; and if such a case ever emerges, it's a pretty good bet that these code fragments wouldn't do the right thing anyway. Remove them, and let the existing default cases do the appropriate thing, namely emit an "unexpected PQresultStatus" bleat.
  Noted while working on the libpq row processor patch, for which I was considering adding a PGRES_SUSPENDED status code --- the same default-case treatment would be appropriate for that.
* Fix syslogger to not lose log coherency under high load. (Tom Lane, 2012-04-04)
  The original coding of the syslogger had an arbitrary limit of 20 large messages concurrently in progress, after which it would just punt and dump message fragments to the output file separately. Our ambitions are a bit higher than that now, so allow the data structure to expand as necessary.
  Reported and patched by Andrew Dunstan; some editing by Tom.
* Arrange for on_exit_nicely to be thread-safe. (Robert Haas, 2012-04-03)
  Extracted from Joachim Wieland's parallel pg_dump patch, with some additional comments by me.
* Add support for renaming domain constraints (Peter Eisentraut, 2012-04-03)
* NLS: Seed Language field in PO header (Peter Eisentraut, 2012-04-02)
  Use the msgmerge --lang option to seed the Language field, recently introduced by gettext, in the header of the new PO file.
* Fix recently introduced typo in NLS file lists (Peter Eisentraut, 2012-04-02)
* Fix O(N^2) behavior in pg_dump when many objects are in dependency loops. (Tom Lane, 2012-03-31)
  Combining the loop workspace with the record of already-processed objects might have been a cute trick, but it behaves horridly if there are many dependency loops to repair: the time spent in the first step of findLoop() grows as O(N^2). Instead use a separate flag array indexed by dump ID, which we can check in constant time. The length of the workspace array is now never more than the actual length of a dependency chain, which should be reasonably short in all cases of practical interest. The code is noticeably easier to understand this way, too.
  Per gripe from Mike Roest. Since this is a longstanding performance bug, backpatch to all supported versions.
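  The heart of the fix is replacing a linear membership scan of the combined workspace with a constant-time lookup. A minimal standalone sketch of that idea in C (the type and field names below are simplified stand-ins, not pg_dump's actual structures):

    #include <stdbool.h>
    #include <stdlib.h>

    /* Simplified stand-in for a dumpable object with a 1-based dump ID. */
    typedef struct DumpableObject
    {
        int     dumpId;
        /* ... dependency links would live here ... */
    } DumpableObject;

    static bool *processed;     /* flag array indexed by dump ID */

    static void
    init_processed_flags(int maxDumpId)
    {
        /* one flag per possible dump ID; calloc zeroes it, i.e. all "false" */
        processed = calloc(maxDumpId + 1, sizeof(bool));
    }

    /* O(1) membership test, replacing the linear scan of the workspace */
    static bool
    already_processed(const DumpableObject *obj)
    {
        return processed[obj->dumpId];
    }

    static void
    mark_processed(const DumpableObject *obj)
    {
        processed[obj->dumpId] = true;
    }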
* Fix O(N^2) behavior in pg_dump for large numbers of owned sequences. (Tom Lane, 2012-03-31)
  The loop that matched owned sequences to their owning tables required time proportional to the number of owned sequences times the number of tables. This work was only expended in selective-dump situations, which is probably why the issue wasn't recognized long since. Refactor slightly so that we can perform this work after the index array for findTableByOid has been set up, reducing the time to O(M log N).
  Per gripe from Mike Roest. Since this is a longstanding performance bug, backpatch to all supported versions.
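  In other words, the owning-table lookup becomes a one-time sort plus a binary search per sequence. A hedged sketch of that pattern (types and names simplified for illustration; not the real findTableByOid code):

    #include <stdlib.h>

    typedef unsigned int Oid;   /* stand-in for the PostgreSQL OID type */

    typedef struct TableInfo
    {
        Oid     oid;
        /* ... other per-table data ... */
    } TableInfo;

    static int
    cmp_tableinfo_oid(const void *a, const void *b)
    {
        Oid     oa = ((const TableInfo *) a)->oid;
        Oid     ob = ((const TableInfo *) b)->oid;

        return (oa < ob) ? -1 : (oa > ob) ? 1 : 0;
    }

    /* Build the index once: O(N log N). */
    static void
    sort_tables_by_oid(TableInfo *tables, size_t ntables)
    {
        qsort(tables, ntables, sizeof(TableInfo), cmp_tableinfo_oid);
    }

    /* Each owned sequence then finds its owning table in O(log N). */
    static TableInfo *
    find_owning_table(TableInfo *tables, size_t ntables, Oid owner_oid)
    {
        TableInfo   key = { owner_oid };

        return bsearch(&key, tables, ntables, sizeof(TableInfo),
                       cmp_tableinfo_oid);
    }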
* Rename frontend keyword arrays to avoid conflict with backend. (Tom Lane, 2012-03-31)
  ecpg and pg_dump each contain keyword arrays with structure similar to the backend's keyword array. Up to now, we actually named those arrays the same as the backend's and relied on parser/keywords.h to declare them. This seems a tad too cute, though, and it breaks now that we need to PGDLLIMPORT-decorate the backend symbols. Rename to avoid the problem. Per buildfarm.
  (It strikes me that maybe we should get rid of the separate keywords.c files altogether, and just define these arrays in the modules that use them, but that's a rather more invasive change.)
* Fix glitch recently introduced in psql tab completion. (Tom Lane, 2012-03-31)
  Over-optimization (by me, looks like :-() broke the case of recognizing a word boundary just before a quoted identifier. Reported and diagnosed by Dean Rasheed.
* Add PGDLLIMPORT to ScanKeywords and NumScanKeywords. (Tom Lane, 2012-03-31)
  Per buildfarm, this is now needed by contrib/pg_stat_statements.
* Add new files to NLS file lists (Peter Eisentraut, 2012-03-30)
  Some of these are newly added, some are older and were forgotten, some don't contain any translatable strings right now but look like they could in the future.
* Replace printf format %i by %d (Peter Eisentraut, 2012-03-30)
  See also ce8d7bb6440710058503d213b2aafcdf56a5b481.
* pgxs: Supply default values for BISON and FLEX variables (Peter Eisentraut, 2012-03-30)
  Otherwise, the availability of these variables depends on what happened to be available at the time the PostgreSQL build was configured.
* initdb: Mark more messages for translation (Peter Eisentraut, 2012-03-29)
  Some Windows-only messages had apparently been forgotten so far. Also make the wording of the messages more consistent with similar messages in other parts, such as pg_ctl and pg_regress.
* Correct epoch of txid_current() when executed on a Hot Standby server. (Simon Riggs, 2012-03-29)
  Initialise ckptXidEpoch from the starting checkpoint and maintain the correct value as we roll forwards. This allows GetNextXidAndEpoch() to return the correct epoch when executed during recovery. Backpatch to 9.0, where the problem first becomes observable by a user.
  Bug report from Daniel Farina.
* Unbreak Windows builds broken by pgpipe removal. (Andrew Dunstan, 2012-03-29)
* Inherit max_safe_fds to child processes in EXEC_BACKEND mode. (Heikki Linnakangas, 2012-03-29)
  Postmaster sets max_safe_fds by testing how many open file descriptors it can open, and that is normally inherited by all child processes at fork(). Not so on EXEC_BACKEND, i.e. Windows, however. Because of that, we effectively ignored max_files_per_process on Windows, and always assumed a conservative default of 32 simultaneous open files. That could have an impact on performance, if you need to access a lot of different files in a query. After this patch, the value is passed to child processes by save/restore_backend_variables() among many other global variables.
  It has been like this forever, but given the lack of complaints about it, I'm not backpatching this.
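  For context, max_safe_fds comes from an empirical probe at postmaster start; a rough standalone sketch of that probing idea (not the actual fd.c code, which also honors max_files_per_process and reserves headroom for system use):

    #include <unistd.h>

    static int
    count_openable_fds(int max_to_probe)
    {
        int     fds[1024];
        int     n = 0;

        while (n < max_to_probe && n < (int) (sizeof(fds) / sizeof(fds[0])))
        {
            int     fd = dup(0);    /* duplicate stdin just to consume a slot */

            if (fd < 0)
                break;              /* hit the per-process descriptor limit */
            fds[n++] = fd;
        }

        /* release the probe descriptors again */
        for (int i = 0; i < n; i++)
            close(fds[i]);

        return n;
    }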
* Remove now redundant pgpipe code. (Andrew Dunstan, 2012-03-28)
* Run maintainer-check on all PO files, not only configured ones (Peter Eisentraut, 2012-03-28)
  The intent is to allow configure --enable-nls=xx for installation speed and size, but have maintainer-check check all source files regardless.
* Attempt to unbreak pg_test_timing on Windows. (Robert Haas, 2012-03-28)
  Per buildfarm, and Álvaro Herrera.
* pg_basebackup: Error handling fixes. (Robert Haas, 2012-03-28)
  Thomas Ogrisegg and Fujii Masao.
* pg_basebackup: Error message improvements. (Robert Haas, 2012-03-28)
  Fujii Masao.
* Bend parse location rules for the convenience of pg_stat_statements. (Tom Lane, 2012-03-27)
  Generally, the parse location assigned to a multiple-token construct is the location of its leftmost token. This commit breaks that rule for the syntaxes TYPENAME 'LITERAL' and CAST(CONSTANT AS TYPENAME) --- the resulting Const will have the location of the literal string, not the typename or CAST keyword.
  The cases where this matters are pretty thin on the ground (no error messages in the regression tests change, for example), and it's unlikely that any user would be confused anyway by an error cursor pointing at the literal. But still it's less than consistent. The reason for changing it is that contrib/pg_stat_statements wants to know the parse location of the original literal, and it was agreed that this is the least unpleasant way to preserve that information through parse analysis.
  Peter Geoghegan.
* Add some infrastructure for contrib/pg_stat_statements. (Tom Lane, 2012-03-27)
  Add a queryId field to Query and PlannedStmt. This is not used by the core backend, except for being copied around at appropriate times. It's meant to allow plug-ins to track a particular query forward from parse analysis to execution.
  The queryId is intentionally not dumped into stored rules (and hence this commit doesn't bump catversion). You could argue that choice either way, but it seems better that stored rule strings not have any dependency on plug-ins that might or might not be present.
  Also, add a post_parse_analyze_hook that gets invoked at the end of parse analysis (but only for top-level analysis of complete queries, not cases such as analyzing a domain's default-value expression). This is mainly meant to be used to compute and assign a queryId, but it could have other applications.
  Peter Geoghegan.
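  A minimal sketch of how a plug-in might use the new hook to stamp a queryId (hook signature as added by this commit; compute_query_id here is a hypothetical placeholder for whatever fingerprinting the plug-in actually does):

    #include "postgres.h"
    #include "fmgr.h"
    #include "parser/analyze.h"

    PG_MODULE_MAGIC;

    void _PG_init(void);

    static post_parse_analyze_hook_type prev_post_parse_analyze_hook = NULL;

    /* hypothetical fingerprint function; a real plug-in hashes the parse tree */
    static uint32
    compute_query_id(Query *query)
    {
        return 0;
    }

    static void
    my_post_parse_analyze(ParseState *pstate, Query *query)
    {
        query->queryId = compute_query_id(query);

        /* be a good citizen: chain to any previously installed hook */
        if (prev_post_parse_analyze_hook)
            prev_post_parse_analyze_hook(pstate, query);
    }

    void
    _PG_init(void)
    {
        prev_post_parse_analyze_hook = post_parse_analyze_hook;
        post_parse_analyze_hook = my_post_parse_analyze;
    }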
* New GUC, track_iotiming, to track I/O timings. (Robert Haas, 2012-03-27)
  Currently, the only way to see the numbers this gathers is via EXPLAIN (ANALYZE, BUFFERS), but the plan is to add visibility through the stats collector and pg_stat_statements in subsequent patches.
  Ants Aasma, reviewed by Greg Smith, with some further changes by me.
* pg_dump: Small message adjustment for consistency (Peter Eisentraut, 2012-03-27)
* Remove dead assignment (Peter Eisentraut, 2012-03-26)
  Found by Coverity.
* Code cleanup for heap_freeze_tuple. (Robert Haas, 2012-03-26)
  It used to be the case that lazy vacuum could call this function with only a shared lock on the buffer, but neither lazy vacuum nor any other code path does that any more. Simplify the code accordingly and clean up some related, obsolete comments.
* Fix COPY FROM for null marker strings that correspond to invalid encoding. (Tom Lane, 2012-03-25)
  The COPY documentation says "COPY FROM matches the input against the null string before removing backslashes". It is therefore reasonable to presume that null markers like E'\\0' will work ... and they did, until someone put the tests in the wrong order during microoptimization-driven rewrites. Since then, we've been failing if the null marker is something that would de-escape to an invalidly-encoded string. Since null markers generally need to be something that can't appear in the data, this represents a nontrivial loss of functionality; it's surprising nobody noticed it earlier.
  Per report from Jeff Davis. Backpatch to 8.4, where this got broken.
* Replace empty locale name with implied value in CREATE DATABASE and initdb. (Tom Lane, 2012-03-25)
  setlocale() accepts locale name "" as meaning "the locale specified by the process's environment variables". Historically we've accepted that for Postgres' locale settings, too. However, it's fairly unsafe to store an empty string in a new database's pg_database.datcollate or datctype fields, because then the interpretation could vary across postmaster restarts, possibly resulting in index corruption and other unpleasantness. Instead, we should expand "" to whatever it means at the moment of calling CREATE DATABASE, which we can do by saving the value returned by setlocale().
  For consistency, make initdb set up the initial lc_xxx parameter values the same way. initdb was already doing the right thing for empty locale names, but it did not replace non-empty names with setlocale results. On a platform where setlocale chooses to canonicalize the spellings of locale names, this would result in annoying inconsistency. (It seems that popular implementations of setlocale don't do such canonicalization, which is a pity, but the POSIX spec certainly allows it to be done.)
  The same risk of inconsistency leads me to not venture back-patching this, although it could certainly be seen as a longstanding bug. Per report from Jeff Davis, though this is not his proposed patch.
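  The setlocale() behavior this relies on is standard C: passing "" resolves the locale from the environment and returns the resolved (possibly canonicalized) name, which is what should be stored instead of the empty string itself. A small standalone illustration:

    #include <locale.h>
    #include <stdio.h>

    int
    main(void)
    {
        const char *resolved = setlocale(LC_COLLATE, "");

        if (resolved == NULL)
        {
            fprintf(stderr, "environment specifies an invalid locale\n");
            return 1;
        }

        /* prints e.g. "en_US.UTF-8" rather than "" */
        printf("LC_COLLATE resolves to: %s\n", resolved);
        return 0;
    }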
* Fix planner's handling of outer PlaceHolderVars within subqueries. (Tom Lane, 2012-03-24)
  For some reason, in the original coding of the PlaceHolderVar mechanism I had supposed that PlaceHolderVars couldn't propagate into subqueries. That is of course entirely possible. When it happens, we need to treat an outer-level PlaceHolderVar much like an outer Var or Aggref, that is SS_replace_correlation_vars() needs to replace the PlaceHolderVar with a Param, and then when building the finished SubPlan we have to provide the PlaceHolderVar expression as an actual parameter for the SubPlan. The handling of the contained expression is a bit delicate but it can be treated exactly like an Aggref's expression.
  In addition to the missing logic in subselect.c, prepjointree.c was failing to search subqueries for PlaceHolderVars that need their relids adjusted during subquery pullup. It looks like everyplace else that touches PlaceHolderVars got it right, though.
  Per report from Mark Murawski. In 9.1 and HEAD, queries affected by this oversight would fail with "ERROR: Upper-level PlaceHolderVar found where not expected". But in 9.0 and 8.4, you'd silently get possibly-wrong answers, since the value transmitted into the subquery wouldn't go to null when it should.
* Cast some printf arguments to avoid possibly-nonportable behavior. (Tom Lane, 2012-03-23)
  Per compiler warnings on buildfarm member black_firefly.
* Refactor simplify_function et al to centralize argument simplification. (Tom Lane, 2012-03-23)
  We were doing the recursive simplification of function/operator arguments in half a dozen different places, with rather baroque logic to ensure it didn't get done multiple times on some arguments. This patch improves that by postponing argument simplification until after we've dealt with named parameters and added any needed default expressions.
  Marti Raudsepp, somewhat hacked on by me.
* Code review for protransform patches. (Tom Lane, 2012-03-23)
  Fix loss of previous expression-simplification work when a transform function fires: we must not simply revert to the untransformed input tree. Instead build a dummy FuncExpr node to pass to the transform function. This has the additional advantage of providing a simpler, more uniform API for transform functions. Move documentation to a somewhat less buried spot, relocate some poorly-placed code, be more wary of null constants and invalid typmod values, add an opr_sanity check on protransform function signatures, and some other minor cosmetic adjustments.
  Note: although this patch touches pg_proc.h, no need for catversion bump, because the changes are cosmetic and don't actually change the intended catalog contents.
* Fix GET DIAGNOSTICS for case of assignment to function's first variable. (Tom Lane, 2012-03-22)
  An incorrect and entirely unnecessary "safety check" in exec_stmt_getdiag() caused the code to treat an assignment to a variable with dno zero as a no-op. Unfortunately, that's a perfectly valid dno. This has been broken since GET DIAGNOSTICS was invented. It's not terribly surprising that the bug went unnoticed for so long, since in most cases you probably wouldn't use the function's first-created variable (normally its first parameter) as a GET DIAGNOSTICS target. Nonetheless, it's broken.
  Per bug #6551 from Adam Buraczewski.
* Refactor to eliminate duplicate copies of conninfo default-finding code. (Tom Lane, 2012-03-22)
  Alex Shulgin, lightly edited by me.
* If a role has a password expiration date, show that in psql's \du output. (Tom Lane, 2012-03-22)
  Per a suggestion from Euler Taveira, it seems like a good idea to include this information in \du (and \dg) output. This costs nothing for people who are not using the VALID UNTIL feature, while for those who are, it's rather critical information.
  Fabrízio de Royes Mello.
* Clean up compiler warnings from unused variables with asserts disabled (Peter Eisentraut, 2012-03-21)
  For those variables only used when asserts are enabled, use a new macro PG_USED_FOR_ASSERTS_ONLY, which expands to __attribute__((unused)) when asserts are not enabled.
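  A simplified sketch of the idea (the real definition lives in the PostgreSQL headers and also depends on compiler support for __attribute__):

    #include <assert.h>

    #ifdef USE_ASSERT_CHECKING
    #define Assert(condition)   assert(condition)
    #define PG_USED_FOR_ASSERTS_ONLY
    #else
    #define Assert(condition)   ((void) 0)
    #define PG_USED_FOR_ASSERTS_ONLY __attribute__((unused))
    #endif

    static void
    example(const int *array, int len)
    {
        int     total PG_USED_FOR_ASSERTS_ONLY = 0;

        for (int i = 0; i < len; i++)
            total += array[i];

        /*
         * With asserts disabled, 'total' would otherwise draw a
         * set-but-not-used warning; the attribute suppresses it.
         */
        Assert(total >= 0);
    }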
* Add installing entab to pgindent instructions (Peter Eisentraut, 2012-03-21)
  And other minor pgindent documentation tweaks.
* Allow new relmapper entries when allow_system_table_mods is true. (Tom Lane, 2012-03-21)
  This restores the pre-9.0 situation that it's possible to add new indexes on pg_class and other mapped-but-not-shared catalogs, so long as you broke the glass and flipped the big red Dont-Touch-Me switch. As before, there are a lot of gotchas, and you'd have to be pretty desperate to try this on a production database; but there doesn't seem to be a reason for relmapper.c to be preventing such things all by itself.
  Per experimentation with a case suggested by Cody Cutrer.
* Improve connectMaintenanceDatabase() error reporting. (Robert Haas, 2012-03-21)
  The prior coding instructs the user to pick an alternative maintenance database, but this is overly clever, since it obscures whatever the real cause of the failure is.
  Josh Kupershmidt.
* Add some CHECK_FOR_INTERRUPTS() calls to the heap-sort call path. (Robert Haas, 2012-03-20)
  I broke this in commit 337b6f5ecf05b21b5e997986884d097d60e4e3d0, which among other things arranged for quicksorts to CHECK_FOR_INTERRUPTS() slightly less frequently. Sadly, it also arranged for heapsorts to CHECK_FOR_INTERRUPTS() much less frequently. Repair.
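  The pattern being restored is simply a periodic interrupt check inside the long-running loop; a hypothetical illustration (CHECK_FOR_INTERRUPTS() is the real backend macro, the surrounding function is made up):

    #include "postgres.h"
    #include "miscadmin.h"

    static void
    process_tuples(void **tuples, int ntuples)
    {
        int     i;

        for (i = 0; i < ntuples; i++)
        {
            CHECK_FOR_INTERRUPTS();     /* honor query cancel / shutdown promptly */

            if (tuples[i] == NULL)
                continue;
            /* ... per-tuple comparison or sifting work here ... */
        }
    }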
* pg_dump: get rid of die_horribly (Alvaro Herrera, 2012-03-20)
  The old code was using exit_horribly or die_horribly depending on whether it had an ArchiveHandle on which to close the connection; but there were places that passed a NULL ArchiveHandle to die_horribly, and other places that used exit_horribly while having an AH available. So there wasn't all that much consistency.
  Improve the situation by keeping only one of the routines, and instead of having to pass the AH down from the caller, arrange for it to be present for an on_exit_nicely callback to operate on.
  Author: Joachim Wieland. Some tweaks by me. Per a suggestion from Robert Haas, in the ongoing "parallel pg_dump" saga.
* pg_dump: Remove undocumented "files" output format (Peter Eisentraut, 2012-03-20)
  This was for demonstration only, and now it was creating compiler warnings from zlib without an obvious fix (see also d923125b77c5d698bb8107a533a21627582baa43); let's just remove it. The "directory" format is presumably similar enough anyway.
* Restructure SELECT INTO's parsetree representation into CreateTableAsStmt. (Tom Lane, 2012-03-19)
  Making this operation look like a utility statement seems generally a good idea, and particularly so in light of the desire to provide command triggers for utility statements. The original choice of representing it as SELECT with an IntoClause appendage had metastasized into rather a lot of places, unfortunately, so that this patch is a great deal more complicated than one might at first expect.
  In particular, keeping EXPLAIN working for SELECT INTO and CREATE TABLE AS subcommands required restructuring some EXPLAIN-related APIs. Add-on code that calls ExplainOnePlan or ExplainOneUtility, or uses ExplainOneQuery_hook, will need adjustment.
  Also, the cases PREPARE ... SELECT INTO and CREATE RULE ... SELECT INTO, which formerly were accepted though undocumented, are no longer accepted. The PREPARE case can be replaced with use of CREATE TABLE AS EXECUTE. The CREATE RULE case doesn't seem to have much real-world use (since the rule would work only once before failing with "table already exists"), so we'll not bother with that one.
  Both SELECT INTO and CREATE TABLE AS still return a command tag of "SELECT nnnn". There was some discussion of returning "CREATE TABLE nnnn", but for the moment backwards compatibility wins the day.
  Andres Freund and Tom Lane.