path: root/src/include/commands
...
* revert: Generalize relation analyze in table AM interface (Alexander Korotkov, 2024-04-16)
  This commit reverts 27bc1772fc and dd1f6b0c17. Per review by Andres Freund.

  Discussion: https://postgr.es/m/20240415201057.khoyxbwwxfgzomeo%40awork3.anarazel.de

* Revert: Implement pg_wal_replay_wait() stored procedure (Alexander Korotkov, 2024-04-11)
  This commit reverts 06c418e163, e37662f221, bf1e650806, 25f42429e2, ee79928441, and 74eaf66f98 per review by Heikki Linnakangas.

  Discussion: https://postgr.es/m/b155606b-e744-4218-bda5-29379779da1a%40iki.fi

* Revert: Allow locking updated tuples in tuple_update() and tuple_delete() (Alexander Korotkov, 2024-04-11)
  This commit reverts 87985cc925 and 818861eb57 per review by Andres Freund.

  Discussion: https://postgr.es/m/20240410165236.rwyrny7ihi4ddxw4%40awork3.anarazel.de

* Provide a way for block-level table AMs to re-use acquire_sample_rows() (Alexander Korotkov, 2024-04-08)
  While keeping the API the same, this commit provides a way for block-level table AMs to re-use the existing acquire_sample_rows() by providing custom callbacks for getting the next block and the next tuple.

  Reported-by: Andres Freund
  Discussion: https://postgr.es/m/20240407214001.jgpg5q3yv33ve6y3%40awork3.anarazel.de
  Reviewed-by: Pavel Borisov

* Clarify what is protected by WaitLSNLock (Alexander Korotkov, 2024-04-07)
  Not just WaitLSNState.waitersHeap, but also WaitLSNState.procInfos and the updating of WaitLSNState.minWaitedLSN are protected by WaitLSNLock. There is one now-documented exception: the fast-path check of the WaitLSNProcInfo.inHeap flag.

  Discussion: https://postgr.es/m/202404030658.hhj3vfxeyhft%40alvherre.pgsql

* Use an LWLock instead of a spinlock in waitlsn.c (Alexander Korotkov, 2024-04-07)
  This should prevent busy-waiting when the number of waiting processes is high.

  Discussion: https://postgr.es/m/202404030658.hhj3vfxeyhft%40alvherre.pgsql
  Author: Alvaro Herrera

* Invent SERIALIZE option for EXPLAIN. (Tom Lane, 2024-04-03)
  EXPLAIN (ANALYZE, SERIALIZE) allows collection of statistics about the volume of data emitted by a query, as well as the time taken to convert the data to the on-the-wire format. Previously there was no way to investigate this without actually sending the data to the client, in which case network transmission costs might swamp what you wanted to see. In particular this feature allows investigating the costs of de-TOASTing compressed or out-of-line data during formatting.

  Stepan Rutz and Matthias van de Meent, reviewed by Tomas Vondra and myself
  Discussion: https://postgr.es/m/ca0adb0e-fa4e-c37e-1cd7-91170b18cae1@gmx.de

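  A minimal illustration of the option described above (editor's sketch; the table name is hypothetical):

      -- Measure output-conversion (serialization) cost without shipping rows to the client.
      EXPLAIN (ANALYZE, SERIALIZE) SELECT * FROM big_toasty_tab;
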
* Use the pairing heap instead of a flat array for LSN replay waiters (Alexander Korotkov, 2024-04-03)
  06c418e163 introduced the pg_wal_replay_wait() procedure, which allows waiting for a particular LSN to be replayed on standby. The waiters were stored in a flat array. Even though scanning small arrays is fast, that might be a problem at scale (a lot of waiting processes).

  This commit replaces the flat shared memory array with a pairing heap, which holds the waiter with the least LSN at the top. This gives us O(log N) complexity for both inserting and removing waiters.

  Reported-by: Alvaro Herrera
  Discussion: https://postgr.es/m/202404030658.hhj3vfxeyhft%40alvherre.pgsql

* Implement pg_wal_replay_wait() stored procedure (Alexander Korotkov, 2024-04-02)
  pg_wal_replay_wait() is to be used on standby and specifies waiting for a specific WAL location to be replayed before starting the transaction. This option is useful when the user makes some data changes on the primary and needs a guarantee of seeing these changes on standby.

  The queue of waiters is stored in a shared memory array sorted by LSN. During WAL replay, waiters whose LSNs have already been replayed are deleted from the shared memory array and woken up by setting their latches.

  pg_wal_replay_wait() needs to wait without any snapshot held. Otherwise, the snapshot could prevent the replay of WAL records, implying a kind of self-deadlock. This is why it is only possible to implement pg_wal_replay_wait() as a procedure working in a non-atomic context, not a function.

  Catversion is bumped.

  Discussion: https://postgr.es/m/eb12f9b03851bb2583adab5df9579b4b%40postgrespro.ru
  Author: Kartyshov Ivan, Alexander Korotkov
  Reviewed-by: Michael Paquier, Peter Eisentraut, Dilip Kumar, Amit Kapila
  Reviewed-by: Alexander Lakhin, Bharath Rupireddy, Euler Taveira

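  A sketch of the intended usage on a standby (editor's illustration; the LSN is a placeholder, and the procedure may accept further arguments, such as a timeout, not named in the entry above):

      -- After capturing pg_current_wal_insert_lsn() on the primary:
      CALL pg_wal_replay_wait('0/306EE20');
      -- Reads from here on are guaranteed to see changes up to that LSN.
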
* Use TidStore for dead tuple TIDs storage during lazy vacuum. (Masahiko Sawada, 2024-04-02)
  Previously, we used a simple array for storing dead tuple IDs during lazy vacuum, which had a number of problems:

  * The array used a single allocation and so was limited to 1GB.
  * The allocation was pessimistically sized according to table size.
  * Lookup with binary search was slow because of poor CPU cache and branch prediction behavior.

  This commit replaces that array with the TID store from commit 30e144287a. Since the backing radix tree makes small allocations as needed, the 1GB limit is now gone. Further, the total memory used is now often smaller by an order of magnitude or more, depending on the distribution of blocks and offsets. These two features should make multiple rounds of heap scanning and index cleanup an extremely rare event. TID lookup during index cleanup is also several times faster, even more so when index order is correlated with heap tuple order.

  Since there is no longer a predictable relationship between the number of dead tuples vacuumed and the space taken up by their TIDs, the number of tuples no longer provides any meaningful insights for users, nor is the maximum number predictable. For that reason this commit also changes to byte-based progress reporting, with the relevant columns of pg_stat_progress_vacuum renamed accordingly to max_dead_tuple_bytes and dead_tuple_bytes.

  For parallel vacuum, both the TID store and supplemental information specific to vacuum are shared among the parallel vacuum workers. As with the previous array, we don't take any locks on TidStore during parallel vacuum since writes are still only done by the leader process.

  Bump catalog version.

  Reviewed-by: John Naylor, (in an earlier version) Dilip Kumar
  Discussion: https://postgr.es/m/CAD21AoAfOZvmfR0j8VmZorZjL7RhTiQdVttNuC4W-Shdc2a-AA%40mail.gmail.com

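  The renamed progress columns can be observed as below (editor's sketch, based on the column names given above):

      -- Monitor byte-based dead-TID accounting during a running VACUUM.
      SELECT relid::regclass, phase, max_dead_tuple_bytes, dead_tuple_bytes
      FROM pg_stat_progress_vacuum;
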
* Add new COPY option LOG_VERBOSITY. (Masahiko Sawada, 2024-04-01)
  This commit adds a new COPY option LOG_VERBOSITY, which controls the amount of messages emitted during processing. Valid values are 'default' and 'verbose'.

  This is currently used in COPY FROM when the ON_ERROR option is set to ignore. If 'verbose' is specified, a NOTICE message is emitted for each discarded row, providing additional information such as line number, column name, and the malformed value. This helps users identify problematic rows that failed to load.

  Author: Bharath Rupireddy
  Reviewed-by: Michael Paquier, Atsushi Torikoshi, Masahiko Sawada
  Discussion: https://www.postgresql.org/message-id/CALj2ACUk700cYhx1ATRQyRw-fBM%2BaRo6auRAitKGff7XNmYfqQ%40mail.gmail.com

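  Combining the two options named above might look like this (editor's sketch; table and file names are hypothetical):

      -- Skip malformed rows and emit a NOTICE for each one.
      COPY tab FROM '/tmp/data.csv'
      WITH (FORMAT csv, ON_ERROR ignore, LOG_VERBOSITY verbose);
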
* Generalize relation analyze in table AM interface (Alexander Korotkov, 2024-03-30)
  Currently, there is just one algorithm for sampling tuples from a table, implemented in acquire_sample_rows(). A custom table AM can only redefine the way to get the next block/tuple by implementing the scan_analyze_next_block() and scan_analyze_next_tuple() API functions.

  This approach doesn't seem general enough. For instance, it's unclear how to sample index-organized tables this way. This commit allows a table AM to encapsulate the whole sampling algorithm (currently implemented in acquire_sample_rows()) into the relation_analyze() API function.

  Discussion: https://postgr.es/m/CAPpHfdurb9ycV8udYqM%3Do0sPS66PJ4RCBM1g-bBpvzUfogY0EA%40mail.gmail.com
  Reviewed-by: Pavel Borisov, Matthias van de Meent

* Remove ObjectClass type (Peter Eisentraut, 2024-03-26)
  ObjectClass is an enum whose values correspond to catalog OIDs. But the extra layer of redirection, which is used only in small parts of the code, and the similarity to ObjectType, are confusing and cumbersome.

  One advantage has been that some switches processing the OCLASS enum don't have "default:" cases. This is so that the compiler tells us when we fail to add support for some new object class. But you can also handle that with some assertions and proper test coverage. It's not even clear how strong this benefit is. For example, in AlterObjectNamespace_oid(), you could still put a new OCLASS into the "ignore object types that don't have schema-qualified names" case, and it might or might not be wrong. Also, there are already various OCLASS switches that do have a default case, so it's not even clear what the preferred coding style should be.

  Reviewed-by: jian he <jian.universality@gmail.com>
  Reviewed-by: Michael Paquier <michael@paquier.xyz>
  Discussion: https://www.postgresql.org/message-id/flat/CAGECzQT3caUbcCcszNewCCmMbCuyP7XNAm60J3ybd6PN5kH2Dw%40mail.gmail.com

* Allow locking updated tuples in tuple_update() and tuple_delete() (Alexander Korotkov, 2024-03-26)
  Currently, in read committed transaction isolation mode (the default), we have the following sequence of actions when tuple_update()/tuple_delete() finds the tuple updated by a concurrent transaction.

  1. Attempt to update/delete the tuple with tuple_update()/tuple_delete(), which returns TM_Updated.
  2. Lock the tuple with tuple_lock().
  3. Re-evaluate the plan qual (recheck if we still need to update/delete and calculate the new tuple for update).
  4. Second attempt to update/delete the tuple with tuple_update()/tuple_delete(). This attempt should be successful, since the tuple was previously locked.

  This commit eliminates step 2 by taking the lock during the first tuple_update()/tuple_delete() call. The heap table access method saves some effort by checking the updated tuple once instead of twice. Future undo-based table access methods, which will start from the latest row version, can immediately place a lock there.

  Also, this commit makes tuple_update()/tuple_delete() optionally save the old tuple into a dedicated slot. That saves effort on re-fetching tuples in certain cases. The code in nodeModifyTable.c is simplified by removing the nested switch/case.

  Discussion: https://postgr.es/m/CAPpHfdua-YFw3XTprfutzGp28xXLigFtzNbuFY8yPhqeq6X5kg%40mail.gmail.com
  Reviewed-by: Aleksander Alekseev, Pavel Borisov, Vignesh C, Mason Sharp
  Reviewed-by: Andres Freund, Chris Travers

* Add temporal FOREIGN KEY constraints (Peter Eisentraut, 2024-03-24)
  Add a PERIOD clause to foreign key constraint definitions. This is supported for range and multirange types. Temporal foreign keys check for range containment instead of equality.

  This feature matches the behavior of the SQL standard temporal foreign keys, but it works on PostgreSQL's native ranges instead of SQL's "periods", which don't exist in PostgreSQL (yet).

  Reference actions ON {UPDATE,DELETE} {CASCADE,SET NULL,SET DEFAULT} are not supported yet.

  Author: Paul A. Jungwirth <pj@illuminatedcomputing.com>
  Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
  Reviewed-by: jian he <jian.universality@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/CA+renyUApHgSZF9-nd-a0+OPGharLQLO=mDHcY4_qQ0+noCUVg@mail.gmail.com

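  A sketch of the PERIOD syntax described above (editor's illustration; table and column names are hypothetical, the referenced key uses the WITHOUT OVERLAPS feature from the related commit further down, and the scalar key part may need the btree_gist extension):

      CREATE TABLE parent (
          id int,
          valid_at daterange,
          PRIMARY KEY (id, valid_at WITHOUT OVERLAPS)
      );
      CREATE TABLE child (
          id int,
          valid_at daterange,
          parent_id int,
          FOREIGN KEY (parent_id, PERIOD valid_at)
              REFERENCES parent (id, PERIOD valid_at)
      );
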
* Reintroduce MAINTAIN privilege and pg_maintain predefined role. (Nathan Bossart, 2024-03-13)
  Roles with MAINTAIN on a relation may run VACUUM, ANALYZE, REINDEX, REFRESH MATERIALIZED VIEW, CLUSTER, and LOCK TABLE on the relation. Roles with privileges of pg_maintain may run those same commands on all relations.

  This was previously committed for v16, but it was reverted in commit 151c22deee due to concerns about search_path tricks that could be used to escalate privileges to the table owner. Commits 2af07e2f74, 59825d1639, and c7ea3f4229 resolved these concerns by restricting search_path when running maintenance commands.

  Bumps catversion.

  Reviewed-by: Jeff Davis
  Discussion: https://postgr.es/m/20240305161235.GA3478007%40nathanxps13

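  Granting the privilege described above (editor's sketch; role and table names are hypothetical):

      -- Per-relation maintenance rights:
      GRANT MAINTAIN ON TABLE tab TO maint_user;
      -- Or maintenance rights on all relations:
      GRANT pg_maintain TO maint_user;
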
* Improve support for ExplainOneQuery() hook (Michael Paquier, 2024-03-11)
  There is a hook called ExplainOneQuery_hook that gives modules the possibility to plug into this code path, but, like utility.c for utility statement execution, there is no corresponding "standard" routine in the case of EXPLAIN executed for one Query.

  This commit adds a new standard_ExplainOneQuery() in explain.c, which is able to run explain on a non-utility Query without calling its hook.

  Per the feedback received from a couple of hackers, this change gives the possibility to cut a few hundred lines of code in some of the popular out-of-core modules as these maintained a copy of ExplainOneQuery(), adding custom extra information at the beginning or the end of the EXPLAIN output.

  Author: Mats Kindahl
  Reviewed-by: Aleksander Alekseev, Jelte Fennema-Nio, Andrei Lepikhov
  Discussion: https://postgr.es/m/CA+14427V_B4EAoC_o-iYYucRdMSOTfpuH9k-QbexffY1HYJBiA@mail.gmail.com

* Improve performance of subsystems on top of SLRU (Alvaro Herrera, 2024-02-28)
  More precisely, what we do here is make the SLRU cache sizes configurable with new GUCs, so that sites with high concurrency and big ranges of transactions in flight (resp. multixacts/subtransactions) can benefit from bigger caches. In order for this to work with good performance, two additional changes are made:

  1. The cache is divided in "banks" (to borrow terminology from CPU caches), and algorithms such as eviction buffer search only affect one specific bank. This forestalls the problem that linear searching for a specific buffer across the whole cache takes too long: we only have to search the specific bank, whose size is small. This work is authored by Andrey Borodin.

  2. Change the locking regime for the SLRU banks, so that each bank uses a separate LWLock. This allows for increased scalability. This work is authored by Dilip Kumar. (A part of this was previously committed as d172b717c6f4.)

  Special care is taken so that the algorithms that can potentially traverse more than one bank release one bank's lock before acquiring the next. This should happen rarely, but particularly clog.c's group commit feature needed code adjustment to cope with this. I (Álvaro) also added lots of comments to make sure the design is sound.

  The new GUCs match the names introduced by bcdfa5f2e2f2 in the pg_stat_slru view. The default values for these parameters are similar to the previous sizes of each SLRU. commit_ts, clog and subtrans accept value 0, which means to adjust by dividing shared_buffers by 512 (so 2MB for every 1GB of shared_buffers), with a cap of 8MB. (A new slru.c function SimpleLruAutotuneBuffers() was added to support this.) The cap was previously 1MB for clog, so for sites with more than 512MB of shared memory the total memory used increases, which is likely a good tradeoff. However, other SLRUs (notably multixact ones) retain smaller sizes and don't support a configured value of 0. These values based on shared_buffers may need to be revisited, but that's an easy change.

  There was some resistance to adding these new GUCs: it would be better to adjust to memory pressure automatically somehow, for example by stealing memory from shared_buffers (where the caches can grow and shrink naturally). However, doing that seems to be a much larger project and one which has made virtually no progress in several years, and because this is such a pain point for so many users, here we take the pragmatic approach.

  Author: Andrey Borodin <x4mmm@yandex-team.ru>
  Author: Dilip Kumar <dilipbalaut@gmail.com>
  Reviewed-by: Amul Sul, Gilles Darold, Anastasia Lubennikova, Ivan Lazarev, Robert Haas, Thomas Munro, Tomas Vondra, Yura Sokolov, Васильев Дмитрий (Dmitry Vasiliev)
  Discussion: https://postgr.es/m/2BEC2B3F-9B61-4C1D-9FB5-5FAB0F05EF86@yandex-team.ru
  Discussion: https://postgr.es/m/CAFiTN-vzDvNz=ExGXz6gdyjtzGixKSqs0mKHMmaQ8sOSEFZ33A@mail.gmail.com

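  Assuming the new GUCs take the pg_stat_slru names plus a "_buffers" suffix (e.g. transaction_buffers, subtransaction_buffers; the exact names are not listed in the entry above), tuning might look like this (editor's sketch):

      -- 0 requests the autotuned size: shared_buffers / 512, capped at 8MB.
      ALTER SYSTEM SET transaction_buffers = 0;
      ALTER SYSTEM SET subtransaction_buffers = 0;
      -- These size shared memory, so a server restart is likely required.
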
* Revert "Refactor CopyReadAttributes{CSV,Text}() to use a callback in COPY FROM"Michael Paquier2024-02-14
| | | | | | | | | | | | | | This reverts commit 95fb5b49024, for reasons similar to what led to 1aa8324b81fa. In this case, the callback was called once per row, which is less worse than the previous callback introduced for COPY TO called once per argument for each row, still the patch set discussed to plug in custom routines to the COPY paths would be able to know which subroutine to use depending on its CopyFromState, so this led to a suboptimal approach at the end. For now, this part is reverted to consider better which approach to use. Discussion: https://postgr.es/m/20240206014125.qofww7ew3dx3v3uk@awork3.anarazel.de
* Refactor CopyReadAttributes{CSV,Text}() to use a callback in COPY FROM (Michael Paquier, 2024-02-05)
  CopyReadAttributes{CSV,Text}() are used to parse lines in text and CSV format. Dispatching to them through a callback reduces the number of "if" branches that need to be checked when parsing fields in CSV and text mode when dealing with a COPY FROM, something that can become more noticeable with more attributes and more lines to process.

  Extracted from a larger patch by the same author.

  Author: Sutou Kouhei
  Discussion: https://postgr.es/m/20231204.153548.2126325458835528809.kou@clear-code.com

* Add EXPLAIN (MEMORY) to report planner memory consumption (Alvaro Herrera, 2024-01-29)
  This adds a new "Memory:" line under the "Planning:" group (which currently only has "Buffers:") when the MEMORY option is specified.

  In order to make the reporting reasonably accurate, we create a separate memory context for planner activities, to be used only when this option is given. The total amount of memory allocated by that context is reported as "allocated"; we subtract memory in the context's freelists from that and report that result as "used". We use MemoryContextStatsInternal() to obtain the quantities.

  The code structure to show buffer usage during planning was not in amazing shape, so I (Álvaro) modified the patch a bit to clean that up in passing.

  Author: Ashutosh Bapat
  Reviewed-by: David Rowley, Andrey Lepikhov, Jian He, Andy Fan
  Discussion: https://www.postgresql.org/message-id/CAExHW5sZA=5LJ_ZPpRO-w09ck8z9p7eaYAqq3Ks9GDfhrxeWBw@mail.gmail.com

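  A minimal use of the option described above (editor's sketch; the table name is hypothetical):

      -- Adds "Memory: used=... allocated=..." under the "Planning:" group.
      EXPLAIN (MEMORY) SELECT * FROM tab WHERE id = 1;
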
* Add progress reporting of skipped tuples during COPY FROM. (Masahiko Sawada, 2024-01-25)
  9e2d870119 enabled the COPY command to skip malformed data; however, there was no visibility into how many tuples were actually skipped during the COPY FROM.

  This commit adds a new "tuples_skipped" column to the pg_stat_progress_copy view to report the number of tuples that were skipped because they contain malformed data.

  Bump catalog version.

  Author: Atsushi Torikoshi
  Reviewed-by: Masahiko Sawada
  Discussion: https://postgr.es/m/d12fd8c99adcae2744212cb23feff6ed%40oss.nttdata.com

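  Observing the new column from another session while a COPY runs (editor's sketch):

      SELECT relid::regclass, tuples_processed, tuples_skipped
      FROM pg_stat_progress_copy;
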
* Add temporal PRIMARY KEY and UNIQUE constraints (Peter Eisentraut, 2024-01-24)
  Add a WITHOUT OVERLAPS clause to PRIMARY KEY and UNIQUE constraints. These are backed by GiST indexes instead of B-tree indexes, since they are essentially exclusion constraints with = for the scalar parts of the key and && for the temporal part.

  Author: Paul A. Jungwirth <pj@illuminatedcomputing.com>
  Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
  Reviewed-by: jian he <jian.universality@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/CA+renyUApHgSZF9-nd-a0+OPGharLQLO=mDHcY4_qQ0+noCUVg@mail.gmail.com

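  A sketch of the clause described above (editor's illustration; names are hypothetical, and the scalar key part may need the btree_gist extension for the backing GiST index):

      CREATE TABLE booking (
          room int,
          during tsrange,
          PRIMARY KEY (room, during WITHOUT OVERLAPS)
      );
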
* Rename COPY option from SAVE_ERROR_TO to ON_ERROR (Alexander Korotkov, 2024-01-19)
  The option values are now "stop" (the default) and "ignore". Future options could be "file 'filename.log'" and "table 'tablename'".

  Discussion: https://postgr.es/m/20240117.164859.2242646601795501168.horikyota.ntt%40gmail.com
  Author: Jian He
  Reviewed-by: Atsushi Torikoshi

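  The renamed option in use (editor's sketch; table and file names are hypothetical):

      -- The default behavior, made explicit:
      COPY tab FROM '/tmp/data.csv' WITH (FORMAT csv, ON_ERROR stop);
      -- Skip rows with soft errors instead of aborting:
      COPY tab FROM '/tmp/data.csv' WITH (FORMAT csv, ON_ERROR ignore);
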
* Add new COPY option SAVE_ERROR_TO (Alexander Korotkov, 2024-01-16)
  Currently, when source data contains unexpected data regarding data type or range, the entire COPY fails. However, in some cases, such data can be ignored and just copying the normal data is preferable.

  This commit adds a new option SAVE_ERROR_TO, which specifies where to save the error information. When this option is specified, COPY skips soft errors and continues copying.

  Currently, SAVE_ERROR_TO only supports "none". This indicates that error information is not saved and COPY just skips the unexpected data and continues running. Later work is expected to add more choices, such as 'log' and 'table'.

  Author: Damir Belyalov, Atsushi Torikoshi, Alex Shulgin, Jian He
  Discussion: https://postgr.es/m/87k31ftoe0.fsf_-_%40commandprompt.com
  Reviewed-by: Pavel Stehule, Andres Freund, Tom Lane, Daniel Gustafsson
  Reviewed-by: Alena Rybakina, Andy Fan, Andrei Lepikhov, Masahiko Sawada
  Reviewed-by: Vignesh C, Atsushi Torikoshi

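  The original spelling of the option as introduced here, before the rename recorded in the entry above (editor's sketch; names are hypothetical):

      COPY tab FROM '/tmp/data.csv' WITH (FORMAT csv, SAVE_ERROR_TO none);
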
* Make attstattarget nullable (Peter Eisentraut, 2024-01-13)
  This changes the pg_attribute field attstattarget into a nullable field in the variable-length part of the row. If no value is set by the user for attstattarget, it is now null instead of previously -1. This saves space in pg_attribute and tuple descriptors for most practical scenarios. (ATTRIBUTE_FIXED_PART_SIZE is reduced from 108 to 104.) Also, null is the semantically more correct value.

  The ANALYZE code internally continues to represent the default statistics target by -1, so that that code can avoid having to deal with null values. But that is now contained to the ANALYZE code. Only the DDL code deals with attstattarget possibly being null. For system columns, the field is now always null. The ANALYZE code skips system columns anyway.

  To set a column's statistics target to the default value, the new command form ALTER TABLE ... SET STATISTICS DEFAULT can be used. (SET STATISTICS -1 still works.)

  Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
  Discussion: https://www.postgresql.org/message-id/flat/4da8d211-d54d-44b9-9847-f2a9f1184c76@eisentraut.org

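  The new command form named above (editor's sketch; table and column names are hypothetical):

      ALTER TABLE tab ALTER COLUMN col SET STATISTICS DEFAULT;
      -- The old spelling still works:
      ALTER TABLE tab ALTER COLUMN col SET STATISTICS -1;
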
* Update copyright for 2024 (Bruce Momjian, 2024-01-03)
  Reported-by: Michael Paquier
  Discussion: https://postgr.es/m/ZZKTDPxBBMt3C0J9@paquier.xyz
  Backpatch-through: 12

* Fix bugs in manipulation of large objects. (Tom Lane, 2023-12-15)
  In v16 and up (since commit afbfc0298), large object ownership checking has been broken because object_ownercheck() didn't take care of the discrepancy between our object-address representation of large objects (classId == LargeObjectRelationId) and the catalog where their ownership info is actually stored (LargeObjectMetadataRelationId). This resulted in failures such as "unrecognized class ID: 2613" when trying to update blob properties as a non-superuser.

  Poking around for related bugs, I found that AlterObjectOwner_internal would pass the wrong classId to the PostAlterHook in the no-op code path where the large object already has the desired owner. Also, recordExtObjInitPriv checked for the wrong classId; that bug is only latent because the stanza is dead code anyway, but as long as we're carrying it around it should be less wrong. These bugs are quite old.

  In HEAD, we can reduce the scope for future bugs of this ilk by changing AlterObjectOwner_internal's API to let the translation happen inside that function, rather than requiring callers to know about it.

  A more bulletproof fix, perhaps, would be to start using LargeObjectMetadataRelationId as the dependency and object-address classId for blobs. However that has substantial risk of breaking third-party code; even within our own code, it'd create hassles for pg_dump which would have to cope with a version-dependent representation. For now, keep the status quo.

  Discussion: https://postgr.es/m/2650449.1702497209@sss.pgh.pa.us

* Use larger segment file names for pg_notify (Alexander Korotkov, 2023-11-29)
  This avoids the wraparound in async.c and removes the corresponding code complexity. The maximum amount of allocated SLRU pages for the NOTIFY / LISTEN queue is now determined by the max_notify_queue_pages GUC. The default value is 1048576, which allows up to 8 GB of disk space to be consumed, exactly the limit we had previously.

  Author: Maxim Orlov, Aleksander Alekseev, Alexander Korotkov, Teodor Sigaev
  Author: Nikita Glukhov, Pavel Borisov, Yura Sokolov
  Reviewed-by: Jacob Champion, Heikki Linnakangas, Alexander Korotkov
  Reviewed-by: Japin Li, Pavel Borisov, Tom Lane, Peter Eisentraut, Andres Freund
  Reviewed-by: Andrey Borodin, Dilip Kumar, Aleksander Alekseev
  Discussion: https://postgr.es/m/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com
  Discussion: https://postgr.es/m/CAJ7c6TPDOYBYrnCAeyndkBktO0WG2xSdYduTF0nxq%2BvfkmTF5Q%40mail.gmail.com

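  Checking and adjusting the GUC named above (editor's sketch; the pages are 8 kB SLRU pages, so the arithmetic follows from the default given in the entry):

      SHOW max_notify_queue_pages;                        -- default 1048576 (~8 GB)
      ALTER SYSTEM SET max_notify_queue_pages = 262144;   -- cap the queue at ~2 GB
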
* Add trailing commas to enum definitions (Peter Eisentraut, 2023-10-26)
  Since C99, there can be a trailing comma after the last value in an enum definition. A lot of new code has been introducing this style on the fly. Some new patches are now taking an inconsistent approach to this. Some add the last comma on the fly if they add a new last value, some are trying to preserve the existing style in each place, some are even dropping the last comma if there was one. We could nudge this all in a consistent direction if we just add the trailing commas everywhere once.

  I omitted a few places where there was a fixed "last" value that will always stay last. I also skipped the header files of libpq and ecpg, in case people want to use those with older compilers. There were also a small number of cases where the enum type wasn't used anywhere (but the enum values were), which ended up confusing pgindent a bit, so I left those alone.

  Discussion: https://www.postgresql.org/message-id/flat/386f8c45-c8ac-4681-8add-e3b0852c1620%40eisentraut.org

* Add support for event triggers on authenticated login (Alexander Korotkov, 2023-10-16)
  This commit introduces a trigger on the login event, allowing some actions to be fired right on user connection. This can be useful for logging or connection-check purposes as well as for some personalization of the environment. Usage details are described in the documentation included, but briefly, usage is the same as for other triggers: create a function returning event_trigger and then create an event trigger on the login event.

  In order to prevent connection-time overhead when there are no triggers, the commit introduces the pg_database.dathasloginevt flag, which indicates that the database has active login triggers. This flag is set by the CREATE/ALTER EVENT TRIGGER command, and unset at connection time when no active triggers are found.

  Author: Konstantin Knizhnik, Mikhail Gribkov
  Discussion: https://postgr.es/m/0d46d29f-4558-3af9-9c85-7774e14a7709%40postgrespro.ru
  Reviewed-by: Pavel Stehule, Takayuki Tsunakawa, Greg Nancarrow, Ivan Panchenko
  Reviewed-by: Daniel Gustafsson, Teodor Sigaev, Robert Haas, Andres Freund
  Reviewed-by: Tom Lane, Andrey Sokolov, Zhihong Yu, Sergey Shinderuk
  Reviewed-by: Gregory Stark, Nikita Malakhov, Ted Yu

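  A minimal login trigger following the recipe above (editor's sketch; function and trigger names are hypothetical):

      CREATE FUNCTION log_login() RETURNS event_trigger
      LANGUAGE plpgsql AS $$
      BEGIN
          RAISE NOTICE 'user % logged in', session_user;
      END;
      $$;

      CREATE EVENT TRIGGER on_login ON login
          EXECUTE FUNCTION log_login();
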
* Move BuildDescForRelation() from tupdesc.c to tablecmds.c (Peter Eisentraut, 2023-10-05)
  BuildDescForRelation()'s main job is to convert ColumnDef lists to pg_attribute/tuple descriptor arrays, which is really mostly an internal subroutine of DefineRelation() and some related functions. That is more the remit of tablecmds.c and doesn't have much to do with the basic tuple descriptor interfaces in tupdesc.c. This is also supported by observing the header includes we can remove in tupdesc.c. By moving it over, we can also (in the future) make BuildDescForRelation() use more internals of tablecmds.c that are not sensible to be exposed in tupdesc.c.

  Discussion: https://www.postgresql.org/message-id/flat/52a125e4-ff9a-95f5-9f61-b87cf447e4da@eisentraut.org

* Provide FORCE_NULL * and FORCE_NOT_NULL * options for COPY FROM (Andrew Dunstan, 2023-09-30)
  These options already exist, but you need to specify a column list for them, which can be cumbersome. We already have the possibility of all columns for FORCE QUOTE, so this is simply extending that facility to FORCE_NULL and FORCE_NOT_NULL.

  Author: Zhang Mingli
  Reviewed-by: Richard Guo, Kyotaro Horiguchi, Michael Paquier
  Discussion: https://postgr.es/m/CACJufxEnVqzOFtqhexF2+AwOKFrV8zHOY3y=p+gPK6eB14pn_w@mail.gmail.com

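  The all-columns form in use (editor's sketch; table and file names are hypothetical):

      -- Match the null string against quoted values too, in every column:
      COPY tab FROM '/tmp/data.csv' WITH (FORMAT csv, FORCE_NULL *);
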
* Add GUC for temporarily disabling event triggers (Daniel Gustafsson, 2023-09-25)
  In order to troubleshoot misbehaving or buggy event triggers, the documented advice is to enter single-user mode. In an attempt to reduce the number of situations where single-user mode is required (or even recommended) for non-extraordinary maintenance, this GUC allows event triggers to be temporarily suspended.

  This was originally extracted from a larger patchset which aimed at supporting event triggers on login events.

  Reviewed-by: Ted Yu <yuzhihong@gmail.com>
  Reviewed-by: Mikhail Gribkov <youzhick@gmail.com>
  Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
  Reviewed-by: Michael Paquier <michael@paquier.xyz>
  Reviewed-by: Robert Haas <robertmhaas@gmail.com>
  Discussion: https://postgr.es/m/9140106E-F9BF-4D85-8FC8-F2D3C094A6D9@yesql.se
  Discussion: https://postgr.es/m/0d46d29f-4558-3af9-9c85-7774e14a7709@postgrespro.ru

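  Assuming the GUC is named event_triggers (a boolean not named in the entry above), suspending triggers for troubleshooting might look like this (editor's sketch):

      ALTER SYSTEM SET event_triggers = off;
      SELECT pg_reload_conf();
      -- ... drop or fix the faulty event trigger, then restore:
      ALTER SYSTEM RESET event_triggers;
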
* Rename some function arguments for better clarity (Peter Eisentraut, 2023-08-23)
  Especially make sure that array arguments have plural names.

  Discussion: https://www.postgresql.org/message-id/flat/5ed89c69-f4e6-5dab-4003-63bde7460e5e%40eisentraut.org

* Add const decorations (Peter Eisentraut, 2023-08-23)
  In index.c and indexcmds.c and some adjacent places. This especially makes it easier to understand, for some complicated function signatures, which are the input and which are the output arguments.

  Discussion: https://www.postgresql.org/message-id/flat/5ed89c69-f4e6-5dab-4003-63bde7460e5e%40eisentraut.org

* Report index vacuum progress. (Masahiko Sawada, 2023-07-11)
  This commit adds two columns, indexes_total and indexes_processed, to the pg_stat_progress_vacuum system view to show the index vacuum progress. These numbers are reported in the "vacuuming indexes" and "cleaning up indexes" phases.

  This uses the new parallel message type for progress reporting added by be06506e7.

  Bump catversion because this changes the definition of pg_stat_progress_vacuum.

  Author: Sami Imseih
  Reviewed-by: Masahiko Sawada, Michael Paquier, Nathan Bossart, Andres Freund
  Discussion: https://www.postgresql.org/message-id/flat/5478DFCD-2333-401A-B2F0-0D186AB09228@amazon.com

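  Watching the new columns during the index phases named above (editor's sketch):

      SELECT relid::regclass, phase, indexes_total, indexes_processed
      FROM pg_stat_progress_vacuum;
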
* Revert MAINTAIN privilege and pg_maintain predefined role. (Nathan Bossart, 2023-07-07)
  This reverts the following commits: 4dbdb82513, c2122aae63, 5b1a879943, 9e1e9d6560, ff9618e82a, 60684dd834, 4441fc704d, and b5d6382496. A role with the MAINTAIN privilege may be able to use search_path tricks to escalate privileges to the table owner. Unfortunately, it is too late in the v16 development cycle to apply the proposed fix, i.e., restricting search_path when running maintenance commands.

  Bumps catversion.

  Reviewed-by: Jeff Davis
  Discussion: https://postgr.es/m/E1q7j7Y-000z1H-Hr%40gemulon.postgresql.org
  Backpatch-through: 16

* Take pg_attribute out of VacAttrStats (Peter Eisentraut, 2023-07-03)
  The VacAttrStats structure contained the whole Form_pg_attribute for a column, but it actually only needs attstattarget from there. So remove the Form_pg_attribute field and make a separate field for attstattarget.

  This simplifies some code for extended statistics that doesn't deal with a column but an expression, which had to fake up pg_attribute rows to satisfy internal APIs. Also, we can remove some comments that essentially said "don't look at pg_attribute directly".

  Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
  Discussion: https://www.postgresql.org/message-id/flat/d6069765-5971-04d3-c10d-e4f7b2e9c459%40eisentraut.org

* Add macro for maximum statistics target (Peter Eisentraut, 2023-07-03)
  The number of places where 10000 was hardcoded had grown a bit beyond the comfort level. Introduce a macro MAX_STATISTICS_TARGET instead.

  Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
  Discussion: https://www.postgresql.org/message-id/flat/d6069765-5971-04d3-c10d-e4f7b2e9c459%40eisentraut.org

* Fix cache lookup hazards introduced by ff9618e82a. (Nathan Bossart, 2023-06-22)
  ff9618e82a introduced has_partition_ancestor_privs(), which is used to check whether a user has MAINTAIN on any partition ancestors. This involves syscache lookups, and presently this function does not take any relation locks, so it is likely subject to the same kind of cache lookup failures that were fixed by 19de0ab23c.

  To fix this problem, this commit partially reverts ff9618e82a. Specifically, it removes the partition-related changes, including the has_partition_ancestor_privs() function mentioned above. This means that MAINTAIN on a partitioned table is no longer sufficient to perform maintenance commands on its partitions. This is more like how privileges for maintenance commands work on supported versions. Privileges are checked for each partition, so a command that flows down to all partitions might refuse to process them (e.g., if the current user doesn't have MAINTAIN on the partition).

  In passing, adjust a few related comments and error messages, and add a test for the privilege checks for CLUSTER on a partitioned table.

  Reviewed-by: Michael Paquier, Jeff Davis
  Discussion: https://postgr.es/m/20230613211246.GA219055%40nathanxps13

* Move bool parameter for vacuum_rel() to option bits. (Nathan Bossart, 2023-06-20)
  ff9618e82a introduced the skip_privs parameter, which is used to skip privilege checks when recursing to a relation's TOAST table. This parameter should have been added as a flag bit in VacuumParams->options instead.

  Suggested-by: Michael Paquier
  Reviewed-by: Michael Paquier, Jeff Davis
  Discussion: https://postgr.es/m/ZIj4v1CwqlDVJZfB%40paquier.xyz

* Revert "Catalog NOT NULL constraints" and falloutAlvaro Herrera2023-04-12
| | | | | | | | | | | This reverts commit e056c557aef4 and minor later fixes thereof. There's a few problems in this new feature -- most notably regarding pg_upgrade behavior, but others as well. This new feature is not in any way critical on its own, so instead of scrambling to fix it we revert it and try again in early 17 with these issues in mind. Discussion: https://postgr.es/m/3801207.1681057430@sss.pgh.pa.us
* Catalog NOT NULL constraints (Alvaro Herrera, 2023-04-07)
  We now create pg_constraint rows for NOT NULL constraints with contype='n'.

  We propagate these constraints during operations such as adding inheritance relationships, creating and attaching partitions, and creating tables LIKE other tables. We mostly follow the well-known rules of conislocal and coninhcount that we have for CHECK constraints, with some adaptations; for example, as opposed to CHECK constraints, we don't match NOT NULL ones by name when descending a hierarchy to alter it; instead we match by column number. This means we don't require the constraint names to be identical across a hierarchy.

  For now, we omit them from system catalogs. Maybe this is worth reconsidering. We don't support NOT VALID nor DEFERRABLE clauses either; these can be added as separate features later (this patch is already large and complicated enough.)

  This has been very long in the making. The first patch was written by Bernd Helmle in 2010 to add a new pg_constraint.contype value ('n'), which I (Álvaro) then hijacked in 2011 and 2012, until that one was killed by the realization that we ought to use contype='c' instead: manufactured CHECK constraints. However, later SQL standard development, as well as nonobvious emergent properties of that design (mostly, failure to distinguish them from "normal" CHECK constraints as well as the performance implication of having to test the CHECK expression), led us to reconsider this choice, so now the current implementation uses contype='n' again.

  In 2016 Vitaly Burovoy also worked on this feature[1] but found no consensus for his proposed approach, which was claimed to be closer to the letter of the standard, requiring additional pg_attribute columns to track the OID of the NOT NULL constraint for that column.

  [1] https://postgr.es/m/CAKOSWNkN6HSyatuys8xZxzRCR-KL1OkHS5-b9qd9bf1Rad3PLA@mail.gmail.com

  Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Author: Bernd Helmle <mailings@oopsware.de>
  Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
  Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
  Discussion: https://postgr.es/m/CACA0E642A0267EDA387AF2B%40%5B172.26.14.62%5D
  Discussion: https://postgr.es/m/AANLkTinLXMOEMz+0J29tf1POokKi4XDkWJ6-DDR9BKgU@mail.gmail.com
  Discussion: https://postgr.es/m/20110707213401.GA27098@alvh.no-ip.org
  Discussion: https://postgr.es/m/1343682669-sup-2532@alvh.no-ip.org
  Discussion: https://postgr.es/m/CAKOSWNkN6HSyatuys8xZxzRCR-KL1OkHS5-b9qd9bf1Rad3PLA@mail.gmail.com
  Discussion: https://postgr.es/m/20220817181249.q7qvj3okywctra3c@alvherre.pgsql

* Add --buffer-usage-limit option to vacuumdb (David Rowley, 2023-04-07)
  1cbbee033 added BUFFER_USAGE_LIMIT to the VACUUM and ANALYZE commands, so here we permit that option to be specified in vacuumdb.

  In passing, adjust the documents for vacuum_buffer_usage_limit and the BUFFER_USAGE_LIMIT VACUUM option to mention "kB" rather than "KB". Do the same for the ERROR message in ExecVacuum() and check_vacuum_buffer_usage_limit(). Without that we might tell a user that the valid minimum value is 128 KB only to reject that because we accept only "kB" and not "KB".

  Also, add a small reminder comment in vacuum.h to try to trigger the memory of anyone adding new fields to VacuumParams that they might want to consider if vacuumdb needs to grow a new option too.

  Author: Melanie Plageman
  Reviewed-by: Justin Pryzby
  Reviewed-by: David Rowley
  Discussion: https://postgr.es/m/ZAzTg3iEnubscvbf@telsasoft.com

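  The server-side option that vacuumdb's new --buffer-usage-limit flag maps onto (editor's sketch; the table name is hypothetical):

      -- Cap the buffer ring used by this VACUUM (valid minimum is 128 kB):
      VACUUM (BUFFER_USAGE_LIMIT '512 kB') tab;
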
* Refresh cost-based delay params more frequently in autovacuum (Daniel Gustafsson, 2023-04-07)
  Allow autovacuum to reload the config file more often so that cost-based delay parameters can take effect while VACUUMing a relation. Previously, autovacuum workers only reloaded the config file once per relation vacuumed, so config changes could not take effect until beginning to vacuum the next table. Now, check if a reload is pending roughly once per block, when checking if we need to delay.

  In order for autovacuum workers to safely update their own cost delay and cost limit parameters without impacting performance, we had to rethink when and how these values were accessed. Previously, an autovacuum worker's wi_cost_limit was set only at the beginning of vacuuming a table, after reloading the config file. Therefore, at the time that autovac_balance_cost() was called, workers vacuuming tables with no cost-related storage parameters could still have different values for their wi_cost_limit_base and wi_cost_delay.

  Now that the cost parameters can be updated while vacuuming a table, workers will (within some margin of error) have no reason to have different values for cost limit and cost delay (in the absence of cost-related storage parameters). This removes the rationale for keeping cost limit and cost delay in shared memory. Balancing the cost limit requires only the number of active autovacuum workers vacuuming a table with no cost-based storage parameters.

  Author: Melanie Plageman <melanieplageman@gmail.com>
  Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
  Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
  Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
  Reviewed-by: Robert Haas <robertmhaas@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/CAAKRu_ZngzqnEODc7LmS1NH04Kt6Y9huSjz5pp7%2BDXhrjDA0gw%40mail.gmail.com

* Separate vacuum cost variables from GUCs (Daniel Gustafsson, 2023-04-07)
  Vacuum code run both by autovacuum workers and a backend doing VACUUM/ANALYZE previously inspected VacuumCostLimit and VacuumCostDelay, which are the global variables backing the GUCs vacuum_cost_limit and vacuum_cost_delay.

  Autovacuum workers needed to override these variables with their own values, derived from autovacuum_vacuum_cost_limit and autovacuum_vacuum_cost_delay and worker cost limit balancing logic. This led to confusing code which, in some cases, both derived and set a new value of VacuumCostLimit from VacuumCostLimit.

  In preparation for refreshing these GUC values more often, introduce new, independent global variables and add a function to update them using the GUCs and existing logic.

  Per suggestion by Kyotaro Horiguchi

  Author: Melanie Plageman <melanieplageman@gmail.com>
  Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
  Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
  Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
  Reviewed-by: Robert Haas <robertmhaas@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/CAAKRu_ZngzqnEODc7LmS1NH04Kt6Y9huSjz5pp7%2BDXhrjDA0gw%40mail.gmail.com

* Make vacuum failsafe_active globally visible (Daniel Gustafsson, 2023-04-07)
  While vacuuming a table in failsafe mode, VacuumCostActive should not be re-enabled. This currently isn't a problem because vacuum cost parameters are only refreshed in between vacuuming tables and failsafe status is reset for every table.

  In preparation for allowing vacuum cost parameters to be updated more frequently, elevate LVRelState->failsafe_active to a global, VacuumFailsafeActive, which will be checked when determining whether or not to re-enable vacuum cost-related delays.

  Author: Melanie Plageman <melanieplageman@gmail.com>
  Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
  Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
  Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
  Reviewed-by: Robert Haas <robertmhaas@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/CAAKRu_ZngzqnEODc7LmS1NH04Kt6Y9huSjz5pp7%2BDXhrjDA0gw%40mail.gmail.com

* Move various prechecks from vacuum() into ExecVacuum() (David Rowley, 2023-04-06)
  vacuum() is used for both the VACUUM command and for autovacuum. There were many prechecks being done inside vacuum() that were just not relevant to autovacuum. Let's move the bulk of these into ExecVacuum() so that they're only executed when running the VACUUM command. This removes a small amount of overhead when autovacuum vacuums a table.

  While we are at it, allocate VACUUM's BufferAccessStrategy in ExecVacuum() and pass it into vacuum() instead of expecting vacuum() to make it if it's not already made by the calling function. To make this work, we need to create the vacuum memory context slightly earlier, so we now need to pass that down to vacuum() so that it's available for use in other memory allocations.

  Author: Melanie Plageman
  Reviewed-by: David Rowley
  Discussion: https://postgr.es/m/20230405211534.4skgskbilnxqrmxg@awork3.anarazel.de

* Perform logical replication actions as the table owner. (Robert Haas, 2023-04-04)
  Up until now, logical replication actions have been performed as the subscription owner, who will generally be a superuser. Commit cec57b1a0fbcd3833086ba686897c5883e0a2afc documented hazards associated with that situation, namely, that any user who owns a table on the subscriber side could assume the privileges of the subscription owner by attaching a trigger, expression index, or some other kind of executable code to it. As a remedy, it suggested not creating configurations where users who are not fully trusted own tables on the subscriber.

  Although that will work, it basically precludes using logical replication in the way that people typically want to use it, namely, to replicate a database from one node to another without necessarily having any restrictions on which database users can own tables. So, instead, change logical replication to execute INSERT, UPDATE, DELETE, and TRUNCATE operations as the table owner when they are replicated.

  Since this involves switching the active user frequently within a session that is authenticated as the subscription user, also impose SECURITY_RESTRICTED_OPERATION restrictions on logical replication code. As an exception, if the table owner can SET ROLE to the subscription owner, these restrictions have no security value, so don't impose them in that case.

  Subscription owners are now required to have the ability to SET ROLE to every role that owns a table that the subscription is replicating. If they don't, replication will fail. Superusers, who normally own subscriptions, satisfy this property by default. Non-superuser users who own subscriptions will need to be granted the roles that own relevant tables.

  Patch by me, reviewed (but not necessarily in its entirety) by Jelte Fennema, Jeff Davis, and Noah Misch.

  Discussion: http://postgr.es/m/CA+TgmoaSCkg9ww9oppPqqs+9RVqCexYCE6Aq=UsYPfnOoDeFkw@mail.gmail.com

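  Satisfying the new SET ROLE requirement for a non-superuser subscription owner (editor's sketch; role names are hypothetical):

      -- On the subscriber, let the subscription owner act as each table-owning role:
      GRANT table_owner_role TO subscription_owner;
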