path: root/src/backend
* Fix PlanRowMark/ExecRowMark structures to handle inheritance correctly.  (Tom Lane, 2011-01-12)
  In an inherited UPDATE/DELETE, each target table has its own subplan, because it might have a column set different from other targets. This means that the resjunk columns we add to support EvalPlanQual might be at different physical column numbers in each subplan. The EvalPlanQual rewrite I did for 9.0 failed to account for this, resulting in possible misbehavior or even crashes during concurrent updates to the same row, as seen in a recent report from Gordon Shannon. Revise the data structure so that we track resjunk column numbers separately for each subplan.
  I also chose to move responsibility for identifying the physical column numbers back to executor startup, instead of assuming that numbers derived during preprocess_targetlist would stay valid throughout subsequent massaging of the plan. That's a bit slower, so we might want to consider undoing it someday; but it would complicate the patch considerably and didn't seem justifiable in a bug fix that has to be back-patched to 9.0.
* Avoid unexpected conversion overflow in planner for distant date values.  (Tom Lane, 2010-12-28)
  The "date" type supports a wider range of dates than int64 timestamps do. However, there is pre-int64-timestamp code in the planner that assumes that all date values can be converted to timestamp with impunity. Fortunately, what we really need out of the conversion is always a double (float8) value; so even when the date is out of timestamp's range it's possible to produce a sane answer. All we need is a code path that doesn't try to force the result into int64. Per trouble report from David Rericha.
  Back-patch to all supported versions. Although this is surely a corner case, there's not much point in advertising a date range wider than timestamp's if we will choke on such values in unexpected places.
* Fix up handling of simple-form CASE with constant test expression.  (Tom Lane, 2010-12-19)
  eval_const_expressions() can replace CaseTestExprs with constants when the surrounding CASE's test expression is a constant. This confuses ruleutils.c's heuristic for deparsing simple-form CASEs, leading to Assert failures or "unexpected CASE WHEN clause" errors. I had put in a hack solution for that years ago (see commit 514ce7a331c5bea8e55b106d624e55732a002295 of 2006-10-01), but bug #5794 from Peter Speck shows that that solution failed to cover all cases.
  Fortunately, there's a much better way, which came to me upon reflecting that Peter's "CASE TRUE WHEN" seemed pretty redundant: we can "simplify" the simple-form CASE to the general form of CASE, by simply omitting the constant test expression from the rebuilt CASE construct. This is intuitively valid because there is no need for the executor to evaluate the test expression at runtime; it will never be referenced, because any CaseTestExprs that would have referenced it are now replaced by constants. This won't save a whole lot of cycles, since evaluating a Const is pretty cheap, but a cycle saved is a cycle earned. In any case it beats kluging ruleutils.c still further. So this patch improves const-simplification and reverts the previous change in ruleutils.c.
  Back-patch to all supported branches. The bug exists in 8.1 too, but it's out of warranty.
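  For reference, a simple-form CASE whose test expression is a constant looks like the query below (a minimal sketch; the values are illustrative). With this change eval_const_expressions() folds such an expression by rebuilding it as a general-form CASE, rather than relying on the old ruleutils.c workaround.
      SELECT CASE 1 WHEN 1 THEN 'one' WHEN 2 THEN 'two' ELSE 'other' END;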
* Fix erroneous parsing of tsquery input "... & !(subexpression) | ..."  (Tom Lane, 2010-12-19)
  After parsing a parenthesized subexpression, we must pop all pending ANDs and NOTs off the stack, just like the case for a simple operand. Per bug #5793.
  Also fix clones of this routine in contrib/intarray and contrib/ltree, where input of types query_int and ltxtquery had the same problem.
  Back-patch to all supported versions.
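  A sketch of the affected input shape described above (the terms are illustrative, not taken from bug #5793):
      SELECT to_tsquery('simple', 'a & !(b | c) | d');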
* Remove optreset from src/port/ implementations of getopt and getopt_long.  (Tom Lane, 2010-12-16)
  We don't actually need optreset, because we can easily fix the code to ensure that it's cleanly restartable after having completed a scan over the argv array; which is the only case we need to restart in. Getting rid of it avoids a class of interactions with the system libraries and allows reversion of my change of yesterday in postmaster.c and postgres.c.
  Back-patch to 8.4. Before that the getopt code was a bit different anyway.
* Fix up getopt() reset management so it works on recent mingw.  (Tom Lane, 2010-12-15)
  The mingw people don't appear to care about compatibility with non-GNU versions of getopt, so force use of our own copy of getopt on Windows. Also, ensure that we make use of optreset when using our own copy.
  Per report from Andrew Dunstan. Back-patch to all versions supported on Windows.
* Translation updates for release 9.0.2  (Peter Eisentraut, 2010-12-13)
* Fix efficiency problems in tuplestore_trim().  (Tom Lane, 2010-12-10)
  The original coding in tuplestore_trim() was only meant to work efficiently in cases where each trim call deleted most of the tuples in the store. Which, in fact, was the pattern of the original usage with a Material node supporting mark/restore operations underneath a MergeJoin. However, WindowAgg now uses tuplestores and it has considerably less friendly trimming behavior. In particular it can attempt to trim one tuple at a time off a large tuplestore. tuplestore_trim() had O(N^2) runtime in this situation because of repeatedly shifting its tuple pointer array. Fix by avoiding shifting the array until a reasonably large number of tuples have been deleted. This can waste some pointer space, but we do still reclaim the tuples themselves, so the percentage wastage should be pretty small.
  Per Jie Li's report of slow percent_rank() evaluation. cume_dist() and ntile() would certainly be affected as well, along with any other window function that has a moving frame start and requires reading substantially ahead of the current row.
  Back-patch to 8.4, where window functions were introduced. There's no need to tweak it before that.
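  An illustrative query of the kind that exercised the one-tuple-at-a-time trimming path (a sketch only; the commit names percent_rank() as the reported slow case, but this exact query is not from the report):
      SELECT x, percent_rank() OVER (ORDER BY x)
      FROM generate_series(1, 100000) AS g(x);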
* Reduce spurious Hot Standby conflicts from never-visible records.  (Simon Riggs, 2010-12-10)
  Hot Standby conflicts only with tuples that were visible at some point. So, when generating the conflict transaction ids, ignore tuples from aborted transactions and tuples updated or deleted during the inserting transaction.
  Following detailed analysis and test case by Noah Misch. The original report covered btree delete records; Heikki Linnakangas correctly observed that this applies to other cases also. The fix covers all sources of cleanup records via common code. Includes an additional fix compared to the commit on HEAD.
* Force default wal_sync_method to be fdatasync on Linux.  (Tom Lane, 2010-12-08)
  Recent versions of the Linux system header files cause xlogdefs.h to believe that open_datasync should be the default sync method, whereas formerly fdatasync was the default on Linux. open_datasync is a bad choice, first because it doesn't actually outperform fdatasync (in fact the reverse), and second because we try to use O_DIRECT with it, causing failures on certain filesystems (e.g., ext4 with data=journal option). This part of the patch is largely per a proposal from Marti Raudsepp. More extensive changes are likely to follow in HEAD, but this is as much change as we want to back-patch.
  Also clean up confusing code and incorrect documentation surrounding the fsync_writethrough option. Those changes shouldn't result in any actual behavioral change, but I chose to back-patch them anyway to keep the branches looking similar in this area.
  In 9.0 and HEAD, also do some copy-editing on the WAL Reliability documentation section.
  Back-patch to all supported branches, since any of them might get used on modern Linux versions.
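  On installations where open_datasync was already picked up as the default, the behavior can also be pinned explicitly in postgresql.conf (an illustrative setting, not part of the patch; the value below is simply the restored Linux default):
      wal_sync_method = fdatasync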
* Fix bugs in the hot standby known-assigned-xids tracking logic.  (Heikki Linnakangas, 2010-12-07)
  If there's an old transaction running in the master, and a lot of transactions have started and finished since, and a WAL record is written in the gap between creating the running-xacts snapshot and WAL-logging it, recovery will fail with a "too many KnownAssignedXids" error. This bug was reported by Joachim Wieland on Nov 19th.
  In the same scenario, when fewer transactions have started so that all the xids fit in KnownAssignedXids despite the first bug, a more serious bug arises. We incorrectly initialize the clog code with the oldest still-running transaction, and when we see a WAL record belonging to a transaction with an XID larger than one that committed already before the checkpoint we're recovering from, we zero the clog page containing the already-committed transaction, leading to data loss.
  In hindsight, trying to track xids in the known-assigned-xids array before seeing the running-xacts record was too complicated. To fix that, hold XidGenLock while the running-xacts snapshot is taken and WAL-logged. That ensures that no transaction can begin or end in that gap, so that in recovery we know that the snapshot contains all transactions running at that point in WAL.
* Add a stack overflow check to copyObject().  (Tom Lane, 2010-12-06)
  There are some code paths, such as SPI_execute(), where we invoke copyObject() on raw parse trees before doing parse analysis on them. Since the bison grammar is capable of building heavily nested parsetrees while itself using only minimal stack depth, this means that copyObject() can be the front-line function that hits stack overflow before anything else does. Accordingly, it had better have a check_stack_depth() call.
  I did a bit of performance testing and found that this slows down copyObject() by only a few percent, so the hit ought to be negligible in the context of complete processing of a query.
  Per off-list report from Toshihide Katayama. Back-patch to all supported branches.
* Fix two typos, by Fujii Masao.  (Heikki Linnakangas, 2010-12-06)
* Prevent inlining a SQL function with multiple OUT parameters.  (Tom Lane, 2010-12-01)
  There were corner cases in which the planner would attempt to inline such a function, which would result in a failure at runtime due to loss of information about exactly what the result record type is. Fix by disabling inlining when the function's recorded result type is RECORD. There might be some sub-cases where inlining could still be allowed, but this is a simple and backpatchable fix, so leave refinements for another day.
  Per bug #5777 from Nate Carson. Back-patch to all supported branches. 8.1 happens to avoid a core-dump here, but it still does the wrong thing.
* Move call to GetTopTransactionId() earlier in LockAcquire(), removing an infrequently occurring race condition in Hot Standby.  (Simon Riggs, 2010-11-29)
  An xid must be assigned before a lock appears in shared memory, rather than immediately after, else GetRunningTransactionLocks() may see InvalidTransactionId, causing assertion failures during lock processing on standby.
  Bug report and diagnosis by Fujii Masao, fix by me.
* Fix leakage of cost_limit when multiple autovacuum workers are active.  (Tom Lane, 2010-11-19)
  When using default autovacuum_vac_cost_limit, autovac_balance_cost relied on VacuumCostLimit to contain the correct global value ... but after the first time through in a particular worker process, it didn't, because we'd trashed it in previous iterations. Depending on the state of other autovac workers, this could result in a steady reduction of the effective cost_limit setting as a particular worker processed more and more tables, causing it to go slower and slower. Spotted by Simon Poole (bug #5759).
  Fix by saving and restoring the GUC variables in the loop in do_autovacuum. In passing, improve a few comments.
  Back-patch to 8.3 ... the cost rebalancing code has been buggy since it was put in.
* Send paramHandle to subprocesses as 64-bit on Win64.  (Magnus Hagander, 2010-11-16)
  The handle to the shared memory segment containing startup parameters was sent as 32-bit even on 64-bit systems. Since HANDLEs appear to be allocated sequentially this shouldn't be a problem until we reach 2^32 open handles in the postmaster, but a 64-bit value should be sent across as 64-bit, and not zero out the top 32 bits.
  Noted by Tom Lane.
* The GiST scan algorithm uses LSNs to detect concurrent page splits, but temporary indexes are not WAL-logged.  (Heikki Linnakangas, 2010-11-16)
  We used a constant LSN for temporary indexes, on the assumption that we don't need to worry about concurrent page splits in temporary indexes because they're only visible to the current session. But that assumption is wrong: it's possible to insert rows and split pages in the same session while a scan is in progress, for example by opening a cursor and fetching some rows, and INSERTing new rows before fetching some more.
  Fix by generating fake increasing LSNs, used in place of real LSNs in temporary GiST indexes.
* Avoid spurious Hot Standby conflicts from btree delete records.  (Simon Riggs, 2010-11-15)
  Similar conflicts were already avoided for related record types. Massive over-caution resulted in a usability bug. Clear theoretical basis for doing this is now confirmed by me. Request to remove from Heikki (twice), over-caution by me.
* Fix canAcceptConnections() bugs introduced by replication-related patches.  (Tom Lane, 2010-11-14)
  We must not return any "okay to proceed" result code without having checked for too many children, else we might fail later on when trying to add the new child to one of the per-child state arrays. It's not clear whether this oversight explains Stefan Kaltenbrunner's recent report, but it could certainly produce a similar symptom.
  Back-patch to 8.4; the logic was not broken before that.
* Add missing outfuncs.c support for struct InhRelation.  (Tom Lane, 2010-11-13)
  This is needed to support debug_print_parse, per report from Jon Nelson. Cursory testing via the regression tests suggests we aren't missing anything else.
* Fix old oversight in const-simplification of COALESCE() expressions.  (Tom Lane, 2010-11-12)
  Once we have found a non-null constant argument, there is no need to examine additional arguments of the COALESCE. The previous coding got it right only if the constant was in the first argument position; otherwise it tried to simplify following arguments too, leading to unexpected behavior like this:
      regression=# select coalesce(f1, 42, 1/0) from int4_tbl;
      ERROR:  division by zero
  It's a minor corner case, but a bug is a bug, so back-patch all the way.
* Add missing support for removing foreign data wrapper / server privileges belonging to a user at DROP OWNED BY.  (Heikki Linnakangas, 2010-11-12)
  Foreign data wrappers and servers don't do anything useful yet, which is why no one has noticed, but since we have them, it seems prudent to fix this. Per report from Chetan Suttraway.
  Backpatch to 9.0; 8.4 has the same problem, but this patch didn't apply there so I'm not going to bother.
* Fix bug introduced by the recent patch to check that the checkpoint redo location read from backup label file can be found.  (Heikki Linnakangas, 2010-11-11)
  wasShutdown was set incorrectly when a backup label file was found.
  Jeff Davis, with a little tweaking by me.
* Fix line_construct_pm() for the case of "infinite" (DBL_MAX) slope.  (Tom Lane, 2010-11-10)
  This code was just plain wrong: what you got was not a line through the given point but a line almost indistinguishable from the Y-axis, although not truly vertical. The only caller that tries to use this function with m == DBL_MAX is dist_ps_internal for the case where the lseg is horizontal; it would end up producing the distance from the given point to the place where the lseg's line crosses the Y-axis. That function is used by other operators too, so there are several operators that could compute wrong distances from a line segment to something else.
  Per bug #5745 from jindiax. Back-patch to all supported branches.
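  One affected operator path can be exercised with a point-to-horizontal-segment distance, which internally goes through the "infinite slope" case described above (a sketch; the coordinates are illustrative):
      SELECT point '(3,1)' <-> lseg '[(0,0),(10,0)]';   -- expected result: 1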
* Repair memory leakage while ANALYZE-ing complex index expressions.  (Tom Lane, 2010-11-09)
  The general design of memory management in Postgres is that intermediate results computed by an expression are not freed until the end of the tuple cycle. For expression indexes, ANALYZE has to re-evaluate each expression for each of its sample rows, and it wasn't bothering to free intermediate results until the end of processing of that index. This could lead to very substantial leakage if the intermediate results were large, as in a recent example from Jakub Ouhrabka.
  Fix by doing ResetExprContext for each sample row. This necessitates adding a datumCopy step to ensure that the final expression value isn't recycled too. Some quick testing suggests that this change adds at worst about 10% to the time needed to analyze a table with an expression index; which is annoying, but seems a tolerable price to pay to avoid unexpected out-of-memory problems.
  Back-patch to all supported branches.
* In rewriteheap.c (used by VACUUM FULL and CLUSTER), calculate the tuple length stored in the line pointer the same way it's calculated in the normal heap_insert() codepath.  (Heikki Linnakangas, 2010-11-09)
  As noted by Jeff Davis, the length stored by raw_heap_insert() included padding but the one stored by the normal codepath did not. While the mismatch seems to be harmless, inconsistency isn't good, and the normal codepath has received a lot more testing over the years.
  Backpatch to 8.3 where the heap rewrite code was introduced.
* Fix error handling in temp-file deletion with log_temp_files active.  (Tom Lane, 2010-11-08)
  The original coding in FileClose() reset the file-is-temp flag before unlinking the file, so that if control came back through due to an error, it wouldn't try to unlink the file twice. This was correct when written, but when the log_temp_files feature was added, the logging action was put in between those two steps. An error occurring during the logging action --- such as a query cancel --- would result in the unlink not getting done at all, as in recent report from Michael Glaesemann.
  To fix this, make sure that we do both the stat and the unlink before doing anything that could conceivably CHECK_FOR_INTERRUPTS. There is a judgment call here, which is which log message to emit first: if you can see only one, which should it be? I chose to log unlink failure at the risk of losing the log_temp_files log message --- after all, if the unlink does fail, the temp file is still there for you to see.
  Back-patch to all versions that have log_temp_files. The code was OK before that.
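  The logging step in question only runs when temp-file logging is enabled, for example with a postgresql.conf setting like this illustrative one:
      log_temp_files = 0    # log every temporary file, regardless of size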
* Fix permanent memory leak in autovacuum launcher.  (Alvaro Herrera, 2010-11-08)
  get_database_list was uselessly allocating its output data, along with some created along the way, in a permanent memory context. This didn't matter when autovacuum was a single, short-lived process, but now that the launcher is permanent, it shows up as a permanent leak.
  To fix, make get_database_list allocate its output data in the caller's context, which is in charge of freeing it when appropriate; the memory leaked by heap_beginscan et al is allocated in a throwaway transaction context.
* Add support for detecting register-stack overrun on IA64.  (Tom Lane, 2010-11-06)
  Per recent investigation, the register stack can grow faster than the regular stack depending on compiler and choice of options. To avoid crashes we must check both stacks in check_stack_depth().
  Back-patch to all supported versions.
* Fix adjust_semi_join to be more cautious about clauseless joins.  (Tom Lane, 2010-11-02)
  It was reporting that these were fully indexed (hence cheap), when of course they're the exact opposite of that. I'm not certain if the case would arise in practice, since a clauseless semijoin is hard to produce in SQL, but if it did happen we'd make some dumb decisions.
* Ensure an index that uses a whole-row Var still depends on its table.  (Tom Lane, 2010-11-02)
  We failed to record any dependency on the underlying table for an index declared like "create index i on t (foo(t.*))". This would create trouble if the table were dropped without previously dropping the index. To fix, simplify some overly-cute code in index_create(), accepting the possibility that sometimes the whole-table dependency will be redundant. Also document this hazard in dependency.c. Per report from Kevin Grittner.
  In passing, prevent a core dump in pg_get_indexdef() if the index's table can't be found. I came across this while experimenting with Kevin's example. Not sure it's a real issue when the catalogs aren't corrupt, but might as well be cautious.
  Back-patch to all supported versions.
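  A minimal sketch of the index shape in question, following the example quoted in the message (the table, column, and function names are illustrative):
      CREATE TABLE t (a int, b text);
      CREATE FUNCTION foo(t) RETURNS text AS 'SELECT $1.b' LANGUAGE sql IMMUTABLE;
      CREATE INDEX i ON t (foo(t.*));
      -- With the fix, index i records a dependency on table t, avoiding the
      -- trouble described above if the table is dropped before the index.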
* Bootstrap WAL to begin at segment logid=0 logseg=1 (000000010000000000000001) rather than 0/0, so that we can safely use 0/0 as an invalid value.  (Heikki Linnakangas, 2010-11-02)
  This is a more future-proof fix for the corner-case bug in streaming replication that was fixed yesterday. We had a similar corner-case bug with log/seg 0/0 back in February as well. Avoiding 0/0 as a valid value should prevent bugs like that in the future. Per Tom Lane's idea.
  Back-patch to 9.0. Since this only affects bootstrapping, it makes no difference to existing installations. We don't need to worry about the bug in existing installations, because if you've managed to get past the initial base backup already, you won't hit the bug in the future either.
* Fix corner-case bug in tracking of latest removed WAL segment during streaming replication.  (Heikki Linnakangas, 2010-11-01)
  We used log/seg 0/0 to indicate that no WAL segments have been removed since startup, but 0/0 is a valid value for the very first WAL segment after initdb. To make that unambiguous, store (latest removed WAL segment + 1) in the global variable.
  Per report from Matt Chesler, also reproduced by Greg Smith.
* Fix long-standing segfault when accept() or one of the calls made right after accepting a connection fails, and the server is compiled with GSSAPI support.  (Heikki Linnakangas, 2010-10-27)
  Report and patch by Alexander V. Chernikov, bug #5731.
* Before removing backup_label and irrevocably changing the pg_control file, check that the WAL file containing the checkpoint redo location can be found.  (Heikki Linnakangas, 2010-10-26)
  This avoids making the cluster irrecoverable if the redo location is in an earlier WAL file than the checkpoint record.
  Report, analysis and patch by Jeff Davis, with small changes by me.
* Fix inline_set_returning_function() to preserve the invalItems list properly.  (Tom Lane, 2010-10-25)
  This avoids a possible crash when inlining a SRF whose argument list contains a reference to an inline-able user function. The crash is quite reproducible with CLOBBER_FREED_MEMORY enabled, but would be less certain in a production build. Problem introduced in 9.0 by the named-arguments patch, which requires invoking eval_const_expressions() before we can try to inline a SRF.
  Per report from Brendan Jurd.
* Add semicolon, missed in previous patch. And update the keyword list in the docs to reflect that OFF is now unreserved.  (Heikki Linnakangas, 2010-10-22)
  Spotted by Tom Lane.
* Make OFF keyword unreserved.  (Heikki Linnakangas, 2010-10-22)
  It's not hard to imagine wanting to use 'off' as a variable or column name, and it's not reserved in recent versions of the SQL spec either. This became particularly annoying in 9.0; before that, PL/pgSQL replaced variable names in queries with parameter markers, so it was possible to use OFF and many other backend parser keywords as variable names.
  Because of that, backpatch to 9.0.
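  With OFF unreserved, identifiers like the following parse without quoting (an illustrative sketch, not taken from the commit):
      CREATE TABLE switches (name text, off boolean);
      SELECT name FROM switches WHERE off;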
* Remove obsolete comment, per Josh Kupershmidt.  (Tom Lane, 2010-10-20)
* Don't try to fetch database name when SetTransactionIdLimit() is executed outside a transaction.  (Tom Lane, 2010-10-20)
  This repairs brain fade in my patch of 2009-08-30: the reason we had been storing oldest-database name, not OID, in ShmemVariableCache was of course to avoid having to do a catalog lookup at times when it might be unsafe.
  This error explains why Aleksandr Dushein is having trouble getting out of an XID wraparound state in bug #5718, though not how he got into that state in the first place. I suspect pg_upgrade is at fault there.
* Fix incorrect generation of whole-row variables in planner.  (Tom Lane, 2010-10-19)
  A couple of places in the planner need to generate whole-row Vars, and were cutting corners by setting vartype = RECORDOID in the Vars, even in cases where there's an identifiable named composite type for the RTE being referenced. While we mostly got away with this, it failed when there was also a parser-generated whole-row reference to the same RTE, because the two Vars weren't equal() due to the difference in vartype. Fix by providing a subroutine the planner can call to generate whole-row Vars the same way the parser does.
  Per bug #5716 from Andrew Tipton. Back-patch to 9.0 where one of the bogus calls was introduced (the other one is new in HEAD).
* Fix low-risk potential denial of service against RADIUS login.  (Magnus Hagander, 2010-10-15)
  Corrupt RADIUS responses were treated as errors and not ignored (which RFC 2865 states they should be). This meant that a user with unfiltered access to the network of the PostgreSQL or RADIUS server could send a spoofed RADIUS response to the PostgreSQL server causing it to reject a valid login, provided the attacker could also guess (or brute-force) the correct port number.
  Fix is to simply retry the receive in a loop until the timeout has expired or a valid (signed by the correct RADIUS server) packet arrives.
  Reported by Alan DeKok in bug #5687.
* Fix bug in comment of timeline history file.  (Simon Riggs, 2010-10-14)
  Fujii Masao
* Fix assorted bugs in GIN's WAL replay logic.  (Tom Lane, 2010-10-11)
  The original coding was quite sloppy about handling the case where XLogReadBuffer fails (because the page has since been deleted). This would result in either "bad buffer id: 0" or an Assert failure during replay, if indeed the page were no longer there. In a couple of places it also neglected to check whether the change had already been applied, which would probably result in corrupted index contents. I believe that bug #5703 is an instance of the first problem. These issues could show up without replication, but only if you were unfortunate enough to crash between modification of a GIN index and the next checkpoint.
  Back-patch to 8.2, which is as far back as GIN has WAL support.
* Behave correctly if INSERT ... VALUES is decorated with additional clauses.  (Tom Lane, 2010-10-02)
  In versions 8.2 and up, the grammar allows attaching ORDER BY, LIMIT, FOR UPDATE, or WITH to VALUES, and hence to INSERT ... VALUES. But the special-case code for VALUES in transformInsertStmt() wasn't expecting any of those, and just ignored them, leading to unexpected results. Rather than complicate the special-case path, just ensure that the presence of any of those clauses makes us treat the query as if it had a general SELECT.
  Per report from Hitoshi Harada.
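  An illustrative statement of the affected form (the table name is hypothetical); the trailing clauses used to be silently ignored and are now honored by treating the query as a general SELECT:
      CREATE TABLE tgt (x int);
      INSERT INTO tgt VALUES (3), (1), (2) ORDER BY 1 LIMIT 2;   -- inserts two rows after sorting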
* Throw an appropriate error if ALTER COLUMN TYPE finds a dependent trigger.  (Tom Lane, 2010-10-02)
  Actually making this case work, if the column is used in the trigger's WHEN condition, will take some new code that probably isn't appropriate to back-patch. For now, just throw a FEATURE_NOT_SUPPORTED error rather than allowing control to reach the "unexpected object" case.
  Per bug #5688 from Daniel Grace. Back-patch to 9.0 where the possibility of such a dependency was introduced.
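  A sketch of the kind of dependency that now draws the explicit error (object names and the trigger body are illustrative, not from the bug report):
      CREATE TABLE orders (status text);
      CREATE FUNCTION touch() RETURNS trigger AS
        $$ BEGIN RETURN NEW; END $$ LANGUAGE plpgsql;
      CREATE TRIGGER orders_status_chg BEFORE UPDATE ON orders
        FOR EACH ROW WHEN (OLD.status IS DISTINCT FROM NEW.status)
        EXECUTE PROCEDURE touch();
      -- Fails with a feature-not-supported error instead of "unexpected object":
      ALTER TABLE orders ALTER COLUMN status TYPE varchar(20);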
* Translation updates for 9.0.1  (Peter Eisentraut, 2010-09-30)
* Fix PlaceHolderVar mechanism's interaction with outer joins.  (Tom Lane, 2010-09-28)
  The point of a PlaceHolderVar is to allow a non-strict expression to be evaluated below an outer join, after which its value bubbles up like a Var and can be forced to NULL when the outer join's semantics require that. However, there was a serious design oversight in that, namely that we didn't ensure that there was actually a correct place in the plan tree to evaluate the placeholder :-(. It may be necessary to delay evaluation of an outer join to ensure that a placeholder that should be evaluated below the join can be evaluated there.
  Per recent bug report from Kirill Simonov. Back-patch to 8.4 where the PlaceHolderVar mechanism was introduced.
* Add "(change requires restart)" note to some postgresql.conf parameters.Robert Haas2010-09-27
| | | | Devrim GÜNDÜZ