path: root/src/backend
Commit message / Author / Date
* Consistently use the function name CreateCheckPoint in code and comments. (Amit Kapila, 2022-01-17)
  Author: Bharath Rupireddy
  Discussion: https://postgr.es/m/CALj2ACVZmKsvDjtd45+9oTcnjUJtC4LF2BYK8TpWT1f=NjJX3w@mail.gmail.com
* Introduce log_destination=jsonlog (Michael Paquier, 2022-01-17)
  "jsonlog" is a new value that can be added to log_destination to provide logs in the JSON format, with its output written to a file, making it the third type of destination of this kind, after "stderr" and "csvlog". The format is convenient for feeding logs to other applications. There is also a plugin external to core that provides this feature using the hook in elog.c, but it had to overwrite the output of "stderr" to work, so doing both at the same time was not possible.

  The files generated by this log format are suffixed with ".json", and use the same rotation policies as the other two formats depending on the backend configuration.

  This takes advantage of the refactoring work done previously in ac7c807, bed6ed3, 8b76f89 and 2d77d83 for the backend parts, and 72b76f7 for the TAP tests, making the addition of any new file-based format rather straightforward.

  The documentation is updated to list all the keys and the values that can exist in this new format. pg_current_logfile() also required a refresh for the new option.

  Author: Sehrope Sarkuni, Michael Paquier
  Reviewed-by: Nathan Bossart, Justin Pryzby
  Discussion: https://postgr.es/m/CAH7T-aqswBM6JWe4pDehi1uOiufqe06DJWaU5=X7dDLyqUExHg@mail.gmail.com
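As a rough illustration of the new destination (the collector requirement and restart/reload behavior are the usual ones for file-based logging, not spelled out in the commit message):

```sql
-- Minimal sketch: enable JSON-format logging alongside stderr.
-- logging_collector is required for file-based destinations and needs a restart;
-- log_destination itself only needs a reload.
ALTER SYSTEM SET logging_collector = on;
ALTER SYSTEM SET log_destination = 'stderr,jsonlog';

-- After a restart, the current ".json" file can be located with:
SELECT pg_current_logfile('jsonlog');
```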
* Teach hash_ok_operator() that record_eq is only sometimes hashable. (Tom Lane, 2022-01-16)
  The need for this was foreseen long ago, but when record_eq actually became hashable (in commit 01e658fa7), we missed updating this spot.

  Per bug #17363 from Elvis Pranskevichus. Back-patch to v14 where the faulty commit came in.

  Discussion: https://postgr.es/m/17363-f6d42fd0d726be02@postgresql.org
* Add stxdinherit flag to pg_statistic_ext_data (Tomas Vondra, 2022-01-16)
  Add the pg_statistic_ext_data.stxdinherit flag, so that for each extended statistics definition we can store two versions of data - one for the relation alone, one for the whole inheritance tree. This is analogous to pg_statistic.stainherit, but we failed to include such a flag in the catalogs for extended statistics, and we had to work around it (see commits 859b3003de, 36c4bc6e72 and 20b9fa308e).

  This changes the relationship between the two catalogs storing extended statistics objects (pg_statistic_ext and pg_statistic_ext_data). Until now, there was a simple 1:1 mapping - for each definition there was one pg_statistic_ext_data row, and this row was inserted while creating the statistics (and then updated during ANALYZE). With the stxdinherit flag, we don't know how many rows there will be (child relations may be added after the statistics object is defined), so there may be up to two rows.

  We could make CREATE STATISTICS always create both rows, but that seems wasteful - without partitioning we only need stxdinherit=false rows, and declaratively partitioned tables need only stxdinherit=true. So we no longer initialize pg_statistic_ext_data in CREATE STATISTICS, and instead make that a responsibility of ANALYZE. Which is what we do for regular statistics too.

  Patch by me, with extensive improvements and fixes by Justin Pryzby.

  Author: Tomas Vondra, Justin Pryzby
  Reviewed-by: Tomas Vondra, Justin Pryzby
  Discussion: https://postgr.es/m/20210923212624.GI831%40telsasoft.com
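A hedged sketch of the new workflow, with hypothetical table and statistics names: CREATE STATISTICS now only defines the object, and ANALYZE creates the pg_statistic_ext_data rows, including the inherited variant for a partitioned table.

```sql
CREATE TABLE measurements (city text, logdate date, peaktemp int)
    PARTITION BY RANGE (logdate);
CREATE TABLE measurements_2022 PARTITION OF measurements
    FOR VALUES FROM ('2022-01-01') TO ('2023-01-01');
CREATE STATISTICS meas_stats (dependencies) ON city, logdate FROM measurements;

ANALYZE measurements;  -- builds the data rows; CREATE STATISTICS no longer does

-- Inspect which versions of the data exist, including the new flag.
SELECT s.stxname, d.stxdinherit
FROM pg_statistic_ext s
JOIN pg_statistic_ext_data d ON d.stxoid = s.oid
WHERE s.stxname = 'meas_stats';
```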
* Build inherited extended stats on partitioned tables (Tomas Vondra, 2022-01-15)
  Commit 859b3003de disabled building of extended stats for inheritance trees, to prevent updating the same catalog row twice. While that resolved the issue, it also means there are no extended stats for declaratively partitioned tables, because there is no data in the non-leaf relations.

  That also means declaratively partitioned tables were not affected by the issue 859b3003de addressed, so this is a regression for queries that calculate estimates for the inheritance tree as a whole (which includes e.g. GROUP BY queries).

  But because partitioned tables are empty, we can invert the condition and build statistics only for the case with inheritance, without losing anything. And we can consider them when calculating estimates.

  It may be necessary to run ANALYZE on partitioned tables, to collect proper statistics. For declarative partitioning there should be no prior statistics, and it might take time before autoanalyze is triggered. For tables partitioned by inheritance the statistics may include data from child relations (if built before 859b3003de), contradicting the current code.

  Report and patch by Justin Pryzby, minor fixes and cleanup by me. Backpatch all the way back to PostgreSQL 10, where extended statistics were introduced (same as 859b3003de).

  Author: Justin Pryzby
  Reported-by: Justin Pryzby
  Backpatch-through: 10
  Discussion: https://postgr.es/m/20210923212624.GI831%40telsasoft.com
* Ignore extended statistics for inheritance trees (Tomas Vondra, 2022-01-15)
  Since commit 859b3003de we only build extended statistics for individual relations, ignoring the child relations. This resolved the issue with updating the catalog tuple twice, but we still tried to use the statistics when calculating estimates for the whole inheritance tree. When the relations contain very distinct data, it may produce bogus estimates.

  This is roughly the same issue 427c6b5b9 addressed ~15 years ago, and we fix it the same way - by ignoring extended statistics when calculating estimates for the inheritance tree as a whole. We still consider extended statistics when calculating estimates for individual child relations, of course.

  This may result in plan changes due to different estimates, but if the old statistics were not describing the inheritance tree particularly well it's quite likely the new plans are actually better.

  Report and patch by Justin Pryzby, minor fixes and cleanup by me. Backpatch all the way back to PostgreSQL 10, where extended statistics were introduced (same as 859b3003de).

  Author: Justin Pryzby
  Reported-by: Justin Pryzby
  Backpatch-through: 10
  Discussion: https://postgr.es/m/20210923212624.GI831%40telsasoft.com
* Unify VACUUM VERBOSE and autovacuum logging. (Peter Geoghegan, 2022-01-14)
  The log_autovacuum_min_duration instrumentation used its own dedicated code for logging, which was not reused by VACUUM VERBOSE. This was highly duplicative, and sometimes led to each code path using slightly different accounting for essentially the same information. Clean things up by making VACUUM VERBOSE reuse the same instrumentation code. This code restructuring changes the structure of the VACUUM VERBOSE output itself, but that seems like an overall improvement.

  The most noticeable change in VACUUM VERBOSE output is that it no longer outputs a distinct message per index per round of index vacuuming. Most of the same information (about each index) is now shown in its new per-operation summary message. This is far more legible.

  A few details are no longer displayed by VACUUM VERBOSE, but that's no real loss in practice, especially in the common case where we don't need multiple index scans/rounds of vacuuming. This super fine-grained information is still available via DEBUG2 messages, which might still be useful in debugging scenarios.

  VACUUM VERBOSE now shows new instrumentation, which is typically very useful: all of the log_autovacuum_min_duration instrumentation that it missed out on before now. This includes information about WAL overhead, buffers hit/missed/dirtied information, and I/O timing information.

  VACUUM VERBOSE still retains a few INFO messages of its own. This is limited to output concerning the progress of heap rel truncation, as well as some basic information about parallel workers. These details are still potentially quite useful. They aren't a good fit for the log output, which must summarize the whole operation.

  Author: Peter Geoghegan <pg@bowt.ie>
  Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
  Reviewed-By: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/CAH2-WzmW4Me7_qR4X4ka7pxP-jGmn7=Npma_-Z-9Y1eD0MQRLw@mail.gmail.com
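A small sketch of the two code paths that now share instrumentation code (table name hypothetical):

```sql
-- Manual path: the per-operation summary is reported to the client as INFO output.
VACUUM (VERBOSE) measurements_2022;

-- Autovacuum path: the same details go to the server log once the threshold is met.
ALTER SYSTEM SET log_autovacuum_min_duration = 0;
SELECT pg_reload_conf();
```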
* Allow "in place" tablespaces. (Thomas Munro, 2022-01-15)
  Provide a developer-only GUC allow_in_place_tablespaces, disabled by default. When enabled, tablespaces can be created with an empty LOCATION string, meaning that they should be created as a directory directly beneath pg_tblspc. This can be used for new testing scenarios, in a follow-up patch. Not intended for end-user usage, since it might confuse backup tools that expect symlinks.

  Reviewed-by: Andres Freund <andres@anarazel.de>
  Reviewed-by: Michael Paquier <michael@paquier.xyz>
  Discussion: https://postgr.es/m/CA%2BhUKGKpRWQ9SxdxxDmTBCJoR0YnFpMBe7kyzY8SUQk%2BHeskxg%40mail.gmail.com
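A hedged example of the developer-only usage described above (object names hypothetical, testing scenarios only):

```sql
SET allow_in_place_tablespaces = on;
CREATE TABLESPACE regress_inplace LOCATION '';  -- created directly beneath pg_tblspc
CREATE TABLE t_inplace (id int) TABLESPACE regress_inplace;
```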
* Rename value node fields (Peter Eisentraut, 2022-01-14)
  For the formerly-Value node types, rename the "val" field to a name specific to the node type, namely "ival", "fval", "sval", and "bsval". This makes some code clearer and catches mixups better.

  Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/8c1a2e37-c68d-703c-5a83-7a6077f4f997@enterprisedb.com
* Refactor AlterRole() (Peter Eisentraut, 2022-01-14)
  Get rid of the three-valued logic for the Boolean variables that track whether a value was specified and what the new value should be. Instead, we can use the "dfoo" variables to determine whether the value was specified and should be applied. This was already done in some cases, so this makes things more uniform and removes one layer of indirection.

  Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/8c1a2e37-c68d-703c-5a83-7a6077f4f997@enterprisedb.com
* Assert redirect pointers are sensible after heap_page_prune(). (Andres Freund, 2022-01-13)
  Corruption of redirect item pointers often only becomes visible well after being corrupted, as e.g. bug #17255 shows: in the original reproducer, gigabytes of WAL sat between the source of the corruption and the corruption becoming visible.

  To make it easier to find / prevent such bugs, verify whether redirect pointers are sensible at the end of heap_page_prune_execute(). 5cd7eb1f1c32 introduced related assertions while modifying the page, but they can't easily detect marking the target of an existing redirect as unused. Sometimes the corruption will be detected later, but that's harder to diagnose.

  Author: Andres Freund <andres@anarazel.de>
  Reviewed-By: Peter Geoghegan <pg@bowt.ie>
  Discussion: https://postgr.es/m/20211122175914.ayk6gg6nvdwuhrzb@alap3.anarazel.de
* Fix possible HOT corruption when RECENTLY_DEAD changes to DEAD while pruning. (Andres Freund, 2022-01-13)
  Since dc7420c2c92 the horizon used for pruning is determined "lazily". A more accurate horizon is built on-demand, rather than in GetSnapshotData(). If a horizon computation is triggered between two HeapTupleSatisfiesVacuum() calls for the same tuple, the result can change from RECENTLY_DEAD to DEAD.

  heap_page_prune() can process the same tid multiple times (once following an update chain, once "directly"). When the result of HeapTupleSatisfiesVacuum() for a tuple changes from RECENTLY_DEAD during the first access to DEAD in the second, the "tuple is DEAD and doesn't chain to anything else" path in heap_prune_chain() can end up marking the target of a LP_REDIRECT ItemId unused.

  Initially this is not easily visible, but once the target of a LP_REDIRECT ItemId is marked unused, a new tuple version can reuse it. At that point the corruption may become visible, as index entries pointing to the "original" redirect item now point to an unrelated tuple.

  To fix, compute HTSV for all tuples on a page only once. This fixes the entire class of problems of HTSV changing inside heap_page_prune(). However, visibility changes can obviously still occur between HTSV checks inside heap_page_prune() and outside (e.g. in lazy_scan_prune()).

  The computation of HTSV is now done in bulk, in heap_page_prune(), rather than on-demand in heap_prune_chain(). Besides being a bit simpler, it also is faster: memory accesses can happen sequentially, rather than in the order of HOT chains.

  There are other causes of HeapTupleSatisfiesVacuum() results changing between two visibility checks for the same tuple, even before dc7420c2c92. E.g. HEAPTUPLE_INSERT_IN_PROGRESS can change to HEAPTUPLE_DEAD when a transaction aborts between the two checks. None of these other visibility status changes are known to cause corruption, but heap_page_prune()'s approach makes it hard to be confident.

  A patch implementing a more fundamental redesign of heap_page_prune(), which fixes this bug and simplifies pruning substantially, has been proposed by Peter Geoghegan in https://postgr.es/m/CAH2-WzmNk6V6tqzuuabxoxM8HJRaWU6h12toaS-bqYcLiht16A@mail.gmail.com

  However, that redesign is a larger change than desirable for backpatching. As the new design still benefits from the batched visibility determination introduced in this commit, it makes sense to commit this narrower fix to 14 and master, and then commit Peter's improvement in master.

  The precise sequence required to trigger the bug is complicated and hard to exercise in an isolation test (until we have wait points). Due to that, the isolation test initially posted at https://postgr.es/m/20211119003623.d3jusiytzjqwb62p%40alap3.anarazel.de and updated in https://postgr.es/m/20211122175914.ayk6gg6nvdwuhrzb%40alap3.anarazel.de isn't committable.

  A followup commit will introduce additional assertions, to detect problems like this more easily.

  Bug: #17255
  Reported-By: Alexander Lakhin <exclusion@gmail.com>
  Debugged-By: Andres Freund <andres@anarazel.de>
  Debugged-By: Peter Geoghegan <pg@bowt.ie>
  Author: Andres Freund <andres@anarazel.de>
  Reviewed-By: Peter Geoghegan <pg@bowt.ie>
  Discussion: https://postgr.es/m/20211122175914.ayk6gg6nvdwuhrzb@alap3.anarazel.de
  Backpatch: 14-, the oldest branch containing dc7420c2c92
* Fix ruleutils.c's dumping of whole-row Vars in more contexts. (Tom Lane, 2022-01-13)
  Commit 7745bc352 intended to ensure that whole-row Vars would be printed with "::type" decoration in all contexts where plain "var.*" notation would result in star-expansion, notably in ROW() and VALUES() constructs. However, it missed the case of INSERT with a single-row VALUES, as reported by Timur Khanjanov.

  Nosing around ruleutils.c, I found a second oversight: the code for RowCompareExpr generates ROW() notation without benefit of an actual RowExpr, and naturally it wasn't in sync :-(. (The code for FieldStore also does this, but we don't expect that to generate strictly parsable SQL anyway, so I left it alone.)

  Back-patch to all supported branches.

  Discussion: https://postgr.es/m/efaba6f9-4190-56be-8ff2-7a1674f9194f@intrans.baku.az
* Improve error handling of HMAC computations (Michael Paquier, 2022-01-13)
  This is similar to b69aba7, except that this completes the work for HMAC with a new routine called pg_hmac_error() that provides more context about the type of error that happened during a HMAC computation:
  - The fallback HMAC implementation in hmac.c relies on cryptohashes, so in some code paths it is necessary to return back the error generated by cryptohashes.
  - For the OpenSSL implementation (hmac_openssl.c), the logic is very similar to cryptohash_openssl.c, where the error context comes from OpenSSL if one of its internal routines failed, with different error codes if something internal to hmac_openssl.c failed or was incorrect.

  Any in-core code paths that use the centralized HMAC interface are related to SCRAM, for errors that are unlikely to happen, with only SHA-256. It would be possible to see errors when computing some HMACs with MD5, for example with OpenSSL FIPS enabled, and this commit would help in reporting the correct errors, but nothing in core uses that. So, at the end, no backpatch to v14 is done, at least for now.

  Errors in SCRAM related to the computation of the server key, stored key, etc. need to pass down the potential error context string across more layers of their respective call stacks for the frontend and the backend, so each surrounding routine is adapted for this purpose.

  Reviewed-by: Sergey Shinderuk
  Discussion: https://postgr.es/m/Yd0N9tSAIIkFd+qi@paquier.xyz
* Fix memory leak in indexUnchanged hint mechanism. (Peter Geoghegan, 2022-01-12)
  Commit 9dc718bd added a "logically unchanged by UPDATE" hinting mechanism, which is currently used within nbtree indexes only (see commit d168b666). This mechanism determined whether or not the incoming item is a logically unchanged duplicate (a duplicate needed only for MVCC versioning purposes) once per row updated per non-HOT update. This approach led to memory leaks which were noticeable with an UPDATE statement that updated sufficiently many rows, at least on tables that happen to have an expression index.

  On HEAD, fix the issue by adding a cache to the executor's per-index IndexInfo struct.

  Take a different approach on Postgres 14 to avoid an ABI break: simply pass down the hint to all indexes unconditionally with non-HOT UPDATEs. This is deemed acceptable because the hint is currently interpreted within btinsert() as "perform a bottom-up index deletion pass if and when the only alternative is splitting the leaf page -- prefer to delete any LP_DEAD-set items first". nbtree must always treat the hint as a noisy signal about what might work, as a strategy of last resort, with costs imposed on non-HOT updaters. (The same thing might not be true within another index AM that applies the hint, which is why the original behavior is preserved on HEAD.)

  Author: Peter Geoghegan <pg@bowt.ie>
  Reported-By: Klaudie Willis <Klaudie.Willis@protonmail.com>
  Diagnosed-By: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://postgr.es/m/261065.1639497535@sss.pgh.pa.us
  Backpatch: 14-, where the hinting mechanism was added.
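A hypothetical reproduction sketch of the leaking pattern: a large non-HOT UPDATE against a table that has an expression index (all names invented for illustration).

```sql
CREATE TABLE accounts (id bigint PRIMARY KEY, email text);
CREATE INDEX accounts_email_lower_idx ON accounts (lower(email));
INSERT INTO accounts
SELECT g, 'user' || g || '@example.com' FROM generate_series(1, 1000000) g;

-- Changing an indexed value forces non-HOT updates; before the fix, backend
-- memory usage could keep growing for the duration of a statement like this.
UPDATE accounts SET email = upper(email);
```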
* vacuumlazy.c: fix "garbage tuples" reference. (Peter Geoghegan, 2022-01-12)
  Another minor oversight in commit 4f8d9d12.
* Consider fractional paths in generate_orderedappend_paths (Tomas Vondra, 2022-01-12)
  When building append paths, we've been looking only at startup and total costs for the paths. That may miss the best option when building fractional paths, because the cheapest fractional path may be dominated by two separate paths (one for startup, one for total cost).

  This extends generate_orderedappend_paths() to also consider which paths have the lowest fractional cost. Currently we only consider paths matching pathkeys - in the future this may be improved by also considering paths that are only partially sorted, with an incremental sort on top.

  Original report of an issue by Arne Roland, patch by me (based on a suggestion by Tom Lane).

  Reviewed-by: Arne Roland, Zhihong Yu
  Discussion: https://postgr.es/m/e8f9ec90-546d-e948-acce-0525f3e92773%40enterprisedb.com
  Discussion: https://postgr.es/m/1581042da8044e71ada2d6e3a51bf7bb%40index.de
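A sketch (hypothetical schema) of the kind of query that benefits: an ordered scan of a partitioned table where only a fraction of the rows is fetched, so the LIMIT makes fractional cost the relevant metric.

```sql
CREATE TABLE events (id bigint, created timestamptz) PARTITION BY RANGE (created);
CREATE TABLE events_2021 PARTITION OF events
    FOR VALUES FROM ('2021-01-01') TO ('2022-01-01');
CREATE TABLE events_2022 PARTITION OF events
    FOR VALUES FROM ('2022-01-01') TO ('2023-01-01');
CREATE INDEX ON events_2021 (created);
CREATE INDEX ON events_2022 (created);

-- Ordered append over the partitions; LIMIT means neither pure startup nor
-- pure total cost alone decides the best child paths.
EXPLAIN SELECT * FROM events ORDER BY created LIMIT 10;
```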
* Add index on pg_publication_rel.prpubid (Alvaro Herrera, 2022-01-12)
  This should have been added for the benefit of GetPublicationRelations; let's add it now. I couldn't measure a performance difference in the TAP tests, but that may be because the tests use very few publications.

  Discussion: https://postgr.es/m/202201120041.p24wvsfcsope@alvherre.pgsql
* Move any code specific to log_destination=csvlog to its own file (Michael Paquier, 2022-01-12)
  The recent refactoring done in ac7c807 makes this move possible and simple, as this simply moves the routines related to csvlog into their own file. This reduces the size of elog.c by 7%.

  Author: Michael Paquier, Sehrope Sarkuni
  Reviewed-by: Nathan Bossart
  Discussion: https://postgr.es/m/CAH7T-aqswBM6JWe4pDehi1uOiufqe06DJWaU5=X7dDLyqUExHg@mail.gmail.com
* Refactor set of routines specific to elog.c (Michael Paquier, 2022-01-12)
  This refactors the following routines and facilities coming from elog.c, to ease their use across multiple log destinations:
  - Start timestamp, including its reset, to store when a process has been started.
  - The log timestamp, associated to an entry (the same timestamp is used when logging across multiple destinations).
  - Routine deciding if a query can be logged or not.
  - The backend type names, depending on the process that logs any information (postmaster, bgworker name or just GetBackendTypeDesc() with a regular backend).
  - Write of logs using the logging piped protocol, with the log collector enabled.
  - Error severity converted to a string.

  These refactored routines will be used for some follow-up changes to move all the csvlog logic into its own file and to potentially add JSON as log destination, reducing the overall size of elog.c as the end result.

  Author: Michael Paquier, Sehrope Sarkuni
  Reviewed-by: Nathan Bossart
  Discussion: https://postgr.es/m/CAH7T-aqswBM6JWe4pDehi1uOiufqe06DJWaU5=X7dDLyqUExHg@mail.gmail.com
* Improve error message for missing extension. (Tom Lane, 2022-01-11)
  If we get ENOENT while trying to read an extension control file, report that as a missing extension (with a HINT to install it) rather than as a filesystem access problem. The message wording was extensively bikeshedded in hopes of pointing people to the idea that they need to do a software installation before they can install the extension into the current database.

  Nathan Bossart, with review/wording suggestions from Daniel Gustafsson, Chapman Flack, and myself

  Discussion: https://postgr.es/m/3950D56A-4E47-48E7-BF9B-F5F22E268BE7@amazon.com
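A sketch of the affected case; the extension name is invented and the messages below are paraphrased, not quoted from the patch.

```sql
CREATE EXTENSION bogus_ext;  -- control file not installed on this server
-- Before: an error about failing to open ".../extension/bogus_ext.control"
-- After:  an error saying the extension is not available, with a HINT to
--         install the extension's software package first
```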
* Clean up messy API for src/port/thread.c. (Tom Lane, 2022-01-11)
  The point of this patch is to reduce inclusion spam by not needing to #include <netdb.h> or <pwd.h> in port.h (which is read by every compile in our tree). To do that, we must remove port.h's declarations of pqGetpwuid and pqGethostbyname.

  pqGethostbyname is only used, and is only ever likely to be used, in src/port/getaddrinfo.c --- which isn't even built on most platforms, making pqGethostbyname dead code for most people. Hence, deal with that by just moving it into getaddrinfo.c.

  To clean up pqGetpwuid, invent a couple of simple wrapper functions with less-messy APIs. This allows removing some duplicate error-handling code, too.

  In passing, remove thread.c from the MSVC build, since it contains nothing we use on Windows.

  Noted while working on 376ce3e40.

  Discussion: https://postgr.es/m/1634252654444.90107@mit.edu
* Improve warning message in pg_signal_backend() (John Naylor, 2022-01-11)
  Previously, invoking pg_terminate_backend() or pg_cancel_backend() with the postmaster PID produced a "PID XXXX is not a PostgreSQL server process" warning, which does not make sense. Change to "backend process" to make the message more exact.

  Nathan Bossart, based on an idea from Bharath Rupireddy with input from Tom Lane and Euler Taveira

  Discussion: https://www.postgresql.org/message-id/flat/CALj2ACW7Rr-R7mBcBQiXWPp=JV5chajjTdudLiF5YcpW-BmHhg@mail.gmail.com
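A hypothetical sketch; 12345 stands in for the postmaster PID, and the warning text is paraphrased from the commit description.

```sql
SELECT pg_cancel_backend(12345);
-- Previously warned that the PID "is not a PostgreSQL server process", which is
-- misleading for the postmaster; the warning now refers to a backend process.
```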
* Enhance pg_log_backend_memory_contexts() for auxiliary processes. (Fujii Masao, 2022-01-11)
  Previously pg_log_backend_memory_contexts() could request to log the memory contexts of backends, but not of auxiliary processes such as the checkpointer. This commit enhances the function so that it can also send the request to auxiliary processes. It's useful to look at the memory contexts of those processes for debugging purposes and for a better understanding of their memory usage patterns.

  Note that pg_log_backend_memory_contexts() cannot send the request to the logger or the statistics collector, because the logging request mechanism is based on shared memory and those processes aren't connected to it.

  Author: Bharath Rupireddy
  Reviewed-by: Vignesh C, Kyotaro Horiguchi, Fujii Masao
  Discussion: https://postgr.es/m/CALj2ACU1nBzpacOK2q=a65S_4+Oaz_rLTsU1Ri0gf7YUmnmhfQ@mail.gmail.com
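A sketch of the newly possible usage: targeting an auxiliary process found via pg_stat_activity.

```sql
-- Ask the checkpointer to dump its memory contexts to the server log.
SELECT pg_log_backend_memory_contexts(pid)
FROM pg_stat_activity
WHERE backend_type = 'checkpointer';
```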
* Fix typo in rewriteheap.c. (Amit Kapila, 2022-01-11)
  Author: Bharath Rupireddy
  Discussion: https://postgr.es/m/CALj2ACW7SvfFW8r2uKH6oQm1kNpt8aQMG61kSBPK0S2PHhFbMw@mail.gmail.com
* Improve error handling of cryptohash computations (Michael Paquier, 2022-01-11)
  The existing cryptohash facility was causing problems in some code paths related to MD5 (frontend and backend) that relied on the fact that the only type of error that could happen would be an OOM, as the MD5 implementation used in PostgreSQL ~13 (the in-core implementation is used when compiling with or without OpenSSL in those older versions) could fail only under this circumstance.

  The new cryptohash facilities can fail for reasons other than OOMs, like attempting MD5 when FIPS is enabled (upstream OpenSSL allows that up to 1.0.2, Fedora and Photon patch OpenSSL 1.1.1 to allow that), so this would cause incorrect reports to show up.

  This commit extends the cryptohash APIs so that callers of those routines can fetch more context when an error happens, by using a new routine called pg_cryptohash_error(). The error states are stored within each implementation's internal context data, so that it is possible to extend the logic depending on what's suited for an implementation. The default implementation requires few error states, but OpenSSL could report various issues depending on its internal state, so more is needed in cryptohash_openssl.c, and the code is shaped so that we are always able to grab the necessary information.

  The core code is changed to adapt to the new error routine, painting more "const" across the call stack where the static errors are stored, particularly in authentication code paths on variables that provide log details. This way, any future changes would warn if attempting to free these strings. The MD5 authentication code was also a bit blurry about the handling of "logdetail" (LOG sent to the postmaster), so improve the comments related to that, while on it.

  The origin of the problem is 87ae969, which introduced the centralized cryptohash facility. Extra changes are done for pgcrypto in v14 for the non-OpenSSL code path to cope with the improvements done by this commit.

  Reported-by: Michael Mühlbeyer
  Author: Michael Paquier
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/89B7F072-5BBE-4C92-903E-D83E865D9367@trivadis.com
  Backpatch-through: 14
* Rename functions to avoid future conflicts (Peter Eisentraut, 2022-01-10)
  Rename range_serialize/range_deserialize to brin_range_serialize/brin_range_deserialize, since there are already public range_serialize/range_deserialize in rangetypes.h.

  Author: Paul A. Jungwirth <pj@illuminatedcomputing.com>
  Discussion: https://www.postgresql.org/message-id/CA+renyX0ipvY6A_jUOHeB1q9mL4bEYfAZ5FBB7G7jUo5bykjrA@mail.gmail.com
* Make pg_get_expr() more bulletproof. (Tom Lane, 2022-01-09)
  Since this function is defined to accept pg_node_tree values, it could get applied to any nodetree that can appear in a cataloged pg_node_tree column. Some such cases can't be supported --- for example, its API doesn't allow providing referents for more than one relation --- but we should try to throw a user-facing error rather than an internal error when encountering such a case.

  In support of this, extend expression_tree_walker/mutator to be sure they'll work on any such node tree (which basically means adding support for relpartbound node types). That allows us to run pull_varnos and check for the case of multiple relations before we start processing the tree.

  The alternative of changing the low-level error thrown for an out-of-range varno isn't appealing, because that could mask actual bugs in other usages of ruleutils.

  Per report from Justin Pryzby. This is basically cosmetic, so no back-patch.

  Discussion: https://postgr.es/m/20211219205422.GT17618@telsasoft.com
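A sketch of the supported usage that motivated the hardening: decompiling a cataloged pg_node_tree that references a single relation, here a partition's bound expression.

```sql
SELECT c.relname, pg_get_expr(c.relpartbound, c.oid) AS partition_bound
FROM pg_class c
WHERE c.relispartition;
```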
* More cleanup of a2ab9c06ea. (Jeff Davis, 2022-01-08)
  Require SELECT privileges when performing UPDATE or DELETE, to be consistent with the way a normal UPDATE or DELETE command works.

  Simplify the subscription test so that it runs faster. Also, wait for the initial table sync to complete to avoid intermittent failures.

  Minor doc fixup.

  Discussion: https://postgr.es/m/CAA4eK1L3-qAtLO4sNGaNhzcyRi_Ufmh2YPPnUjkROBK0tN%3Dx%3Dg%40mail.gmail.com
  Discussion: https://postgr.es/m/1514479.1641664638%40sss.pgh.pa.us
  Discussion: https://postgr.es/m/Ydkfj5IsZg7mQR0g@paquier.xyz
* Respect permissions within logical replication. (Jeff Davis, 2022-01-07)
  Prevent logical replication workers from performing insert, update, delete, truncate, or copy commands on tables unless the subscription owner has permission to do so.

  Prevent subscription owners from circumventing row-level security by forbidding replication into tables with row-level security policies which the subscription owner is subject to, without regard to whether the policy would ordinarily allow the INSERT, UPDATE, DELETE or TRUNCATE which is being replicated.

  This seems sufficient for now, as superusers, roles with bypassrls, and target table owners should still be able to replicate despite RLS policies. We can revisit the question of applying row-level security policies on a per-row basis if this restriction proves too severe in practice.

  Author: Mark Dilger
  Reviewed-by: Jeff Davis, Andrew Dunstan, Ronan Dunklau
  Discussion: https://postgr.es/m/9DFC88D3-1300-4DE8-ACBC-4CEF84399A53%40enterprisedb.com
* Update copyright for 2022 (Bruce Momjian, 2022-01-07)
  Backpatch-through: 10
* Prevent altering partitioned table's rowtype, if it's used elsewhere. (Tom Lane, 2022-01-06)
  We disallow altering a column datatype within a regular table, if the table's rowtype is used as a column type elsewhere, because we lack code to go around and rewrite the other tables. This restriction should apply to partitioned tables as well, but it was not checked because ATRewriteTables and ATPrepAlterColumnType were not on the same page about who should do it for which relkinds.

  Per bug #17351 from Alexander Lakhin. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/17351-6db1870f3f4f612a@postgresql.org
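A hedged sketch of the now-rejected case (object names hypothetical):

```sql
CREATE TABLE part_parent (a int, b text) PARTITION BY LIST (a);
CREATE TABLE uses_rowtype (r part_parent);  -- column uses the partitioned table's rowtype

-- With the fix, this is rejected just as it would be for a regular table whose
-- rowtype is used elsewhere, instead of slipping through unchecked.
ALTER TABLE part_parent ALTER COLUMN b TYPE varchar(10);
```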
* Create foreign key triggers in partitioned tables too (Alvaro Herrera, 2022-01-05)
  While user-defined triggers defined on a partitioned table have a catalog definition for both it and its partitions, internal triggers used by foreign keys defined on partitioned tables only have a catalog definition for its partitions. This commit fixes that so that partitioned tables get the foreign key triggers too, just like user-defined triggers. Moreover, like user-defined triggers, partitions' internal triggers will now also have their tgparentid set appropriately. This is to allow subsequent commit(s) to make the foreign-key-related events be fired in some cases using the parent table's triggers instead of those of the partitions.

  This also changes what tgisinternal means in some cases. Currently, it means either that the trigger is an internal implementation object of a foreign key constraint, or a "child" trigger on a partition cloned from the trigger on the parent. This commit changes it to only mean the former, to avoid confusion. As for the latter, it can be told by tgparentid being nonzero, which is now true both for user-defined and foreign keys' internal triggers.

  Author: Amit Langote <amitlangote09@gmail.com>
  Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
  Reviewed-by: Arne Roland <A.Roland@index.de>
  Discussion: https://postgr.es/m/CA+HiwqG7LQSK+n8Bki8tWv7piHD=PnZro2y6ysU2-28JS6cfgQ@mail.gmail.com
* Reduce relcache access in WAL sender streaming logical changes (Michael Paquier, 2022-01-05)
  get_rel_sync_entry(), which is called each time a change needs to be logically replicated, is a rather hot code path in the WAL sender sending logical changes. This code path was doing a relcache access on relkind and relpartition for each logical change, but we only need to know this information when building or re-building the cached information for a relation.

  Some measurements prove that this is noticeable in perf profiles, particularly when attempting to replicate changes from relations that are not published, as these cause less overhead in the WAL sender, delaying further the replication of changes for relations that are published.

  Issue introduced in 83fd453.

  Author: Hou Zhijie
  Reviewed-by: Kyotaro Horiguchi, Euler Taveira
  Discussion: https://postgr.es/m/OS0PR01MB5716E863AA9E591C1F010F7A947D9@OS0PR01MB5716.jpnprd01.prod.outlook.com
  Backpatch-through: 13
* Remove redundant initialization of BrinMemTuple. (Tom Lane, 2022-01-04)
  brin_new_memtuple already did this, so there's no need for initialize_brin_buildstate to do it again.

  Richard Guo, reviewed by Bharath Rupireddy

  Discussion: https://postgr.es/m/CAMbWs4-kYYpKNOdiWtsCZ3jbkFFj4nhOVH22JH7dsrMYX=aGjg@mail.gmail.com
* Fix silly mistake in Assert (Alvaro Herrera, 2022-01-04)
* Allow special SKIP LOCKED condition in Assert() (Alvaro Herrera, 2022-01-04)
  Under concurrency, it is possible for two sessions to be merrily locking and releasing a tuple and marking it again as HEAP_XMAX_INVALID all the while a third session attempts to lock it, miserably fails at it, and then contemplates life, the universe and everything only to eventually fail an assertion that said bit is not set. Before SKIP LOCKED that was indeed a reasonable expectation, but alas! commit df630b0dd5ea falsified it.

  This bug is as old as time itself, and even older, if you think time begins with the oldest supported branch. Therefore, backpatch to all supported branches.

  Author: Simon Riggs <simon.riggs@enterprisedb.com>
  Discussion: https://postgr.es/m/CANbhV-FeEwMnN8yuMyss7if1ZKjOKfjcgqB26n8pqu1e=q0ebg@mail.gmail.com
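A sketch of the locking pattern involved (queue-style row claiming, hypothetical names); concurrent sessions running this can skip tuples that other sessions are busy locking and releasing, which is the situation the Assert() now tolerates.

```sql
BEGIN;
SELECT id
FROM job_queue
WHERE status = 'pending'
ORDER BY id
FOR UPDATE SKIP LOCKED
LIMIT 1;
-- ... process the claimed job ...
COMMIT;
```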
* Handle mixed returnable and non-returnable columns better in IOS. (Tom Lane, 2022-01-03)
  We can revert the code changes of commit b5febc1d1 now, because commit 9a3ddeb51 installed a real solution for the difficulty that b5febc1d1 just dodged, namely that the planner might pick the wrong one of several index columns nominally containing the same value. It only matters which one we pick if we pick one that's not returnable, and that mistake is now foreclosed.

  Although both of the aforementioned commits were back-patched, I don't feel a need to take any risk by back-patching this one. The cases that it improves are very corner-ish.

  Discussion: https://postgr.es/m/3179992.1641150853@sss.pgh.pa.us
* Fix index-only scan plans, take 2. (Tom Lane, 2022-01-03)
  Commit 4ace45677 failed to fix the problem fully, because the same issue of attempting to fetch a non-returnable index column can occur when rechecking the indexqual after using a lossy index operator. Moreover, it broke EXPLAIN for such indexquals (which indicates a gap in our test cases :-().

  Revert the code changes of 4ace45677 in favor of adding a new field to struct IndexOnlyScan, containing a version of the indexqual that can be executed against the index-returned tuple without using any non-returnable columns. (The restrictions imposed by check_index_only guarantee this is possible, although we may have to recompute indexed expressions.) Support construction of that during setrefs.c processing by marking IndexOnlyScan.indextlist entries as resjunk if they can't be returned, rather than removing them entirely. (We could alternatively require setrefs.c to look up the IndexOptInfo again, but abusing resjunk this way seems like a reasonably safe way to avoid needing to do that.)

  This solution isn't great from an API-stability standpoint: if there are any extensions out there that build IndexOnlyScan structs directly, they'll be broken in the next minor releases. However, only a very invasive extension would be likely to do such a thing. There's no change in the Path representation, so typical planner extensions shouldn't have a problem.

  As before, back-patch to all supported branches.

  Discussion: https://postgr.es/m/3179992.1641150853@sss.pgh.pa.us
  Discussion: https://postgr.es/m/17350-b5bdcf476e5badbb@postgresql.org
* Clean up error messages related to bad datetime units. (Tom Lane, 2022-01-03)
  Adjust the error texts used for unrecognized/unsupported datetime units so that there are just two strings to translate, not two per datatype. Along the way, follow our usual error message style of not double-quoting type names, and instead making sure that we say the name is a type. Fix a couple of places in date.c that were using the wrong one of "unrecognized" and "unsupported".

  Nikhil Benesch, with a bit more editing by me

  Discussion: https://postgr.es/m/CAPWqQZTURGixmbMH2_Z3ZtWGA0ANjUb9bwtkkxSxSfDeFHuM6Q@mail.gmail.com
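A sketch of the two classes of message being distinguished; the exact wording is paraphrased and the second example assumes "timezone" remains invalid for plain dates.

```sql
SELECT extract(fortnight FROM current_date);  -- not a unit at all: "unrecognized"
SELECT extract(timezone  FROM current_date);  -- a real unit, but not valid for type date: "unsupported"
```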
* Use MaxLockMode symbol in more places. (Tom Lane, 2022-01-03)
  As long as we have this macro, it makes sense to use it in the LockMethodData structures.

  Julien Rouhaud

  Discussion: https://postgr.es/m/20220103064722.ewdv4evlez5m7mdn@jrouhaud
* Avoid using DefElemAction in AlterPublicationStmt (Alvaro Herrera, 2022-01-03)
  Create a new enum type for it. This allows adding new values for future functionality without disrupting unrelated uses of DefElem.

  Discussion: https://postgr.es/m/202112302021.ca7ihogysgh3@alvherre.pgsql
* Fix index-only scan plans when not all index columns can be returned. (Tom Lane, 2022-01-01)
  If an index has both returnable and non-returnable columns, and one of the non-returnable columns is an expression using a Var that is in a returnable column, then a query returning that expression could result in an index-only scan plan that attempts to read the non-returnable column, instead of recomputing the expression from the returnable column as intended.

  To fix, redefine the "indextlist" list of an IndexOnlyScan plan node as containing null Consts in place of any non-returnable columns. This solves the problem by preventing setrefs.c from falsely matching to such entries. The executor is happy since it only cares about the exposed types of the entries, and ruleutils.c doesn't care because a correct plan won't reference those entries.

  I considered some other ways to prevent setrefs.c from doing the wrong thing, but this way seems good since (a) it allows a very localized fix, (b) it makes the indextlist structure more compact in many cases, and (c) the indextlist is now a more faithful representation of what the index AM will actually produce, viz. nulls for any non-returnable columns.

  This is easier to hit since we introduced included columns, but it's possible to construct failing examples without that, as per the added regression test. Hence, back-patch to all supported branches.

  Per bug #17350 from Louis Jachiet.

  Discussion: https://postgr.es/m/17350-b5bdcf476e5badbb@postgresql.org
* Small cleanups related to PUBLICATION framework code (Alvaro Herrera, 2021-12-30)
  Discussion: https://postgr.es/m/202112302021.ca7ihogysgh3@alvherre.pgsql
* Revert b2a459edf "Fix GRANTED BY support in REVOKE ROLE statements" (Daniel Gustafsson, 2021-12-30)
  The reverted commit attempted to fix SQL specification compliance for the cases which 6aaaa76bb left. This however broke existing behavior, which takes precedence over spec compliance, so revert.

  The introduced tests are left in place after the revert since the codepath isn't well covered. Per bug report #17346. Backpatch down to 14 where it was introduced.

  Reported-by: Andrew Bille <andrewbille@gmail.com>
  Discussion: https://postgr.es/m/17346-f72b28bd1a341060@postgresql.org
* Fix issues in pgarch's new directory-scanning logic. (Tom Lane, 2021-12-29)
  The arch_filenames[] array elements were one byte too small, so that a maximum-length filename would get corrupted if another entry were made after it. (Noted by Thomas Munro, fix by Nathan Bossart.)

  Move these arrays into a palloc'd struct, so that we aren't wasting a few kilobytes of static data in each non-archiver process.

  Add a binaryheap_reset() call to make it plain that we start the directory scan with an empty heap. I don't think there's any live bug of that sort, but it seems fragile, and this is very cheap insurance.

  Cleanup for commit beb4e9ba1, so no back-patch needed.

  Discussion: https://postgr.es/m/CA+hUKGLHAjHuKuwtzsW7uMJF4BVPcQRL-UMZG_HM-g0y7yLkUg@mail.gmail.com
* Fix incorrect format placeholders (Peter Eisentraut, 2021-12-29)
* Revert changes about warnings/errors for placeholders. (Tom Lane, 2021-12-27)
  Revert commits 5609cc01c, 2ed8a8cc5, and 75d22069e until we have a less broken idea of how this should work in parallel workers. Per buildfarm.

  Discussion: https://postgr.es/m/1640909.1640638123@sss.pgh.pa.us
* Rename EmitWarningsOnPlaceholders() to MarkGUCPrefixReserved(). (Tom Lane, 2021-12-27)
  This seems like a clearer name for what it does now. Provide a compatibility macro so that extensions don't have to convert to the new name right away.

  Discussion: https://postgr.es/m/116024.1640111629@sss.pgh.pa.us
* Rethink handling of settings with a prefix reserved by an extension. (Tom Lane, 2021-12-27)
  Commit 75d22069e made SET print a warning if you tried to set an unrecognized parameter within a namespace previously reserved by an extension. It seems better for that to be an outright error though, for the same reason that we don't let you set unrecognized unqualified parameter names. In any case, the preceding implementation was inefficient and erroneous. Perform the check in a more appropriate spot, and be more careful about prefix-match cases.

  Discussion: https://postgr.es/m/116024.1640111629@sss.pgh.pa.us