author | Michael Paquier <michael@paquier.xyz> | 2023-01-31 12:47:08 +0900
committer | Michael Paquier <michael@paquier.xyz> | 2023-01-31 12:47:08 +0900
commit | c5b2975ec183e8776f82bad33ec957ce58ec709a (patch)
tree | 6e8da8504e2ddb9f4a43fa72459032fa6505890c /src
parent | 4785af9e6318856d45e51fbc328d52f6c5340e13 (diff)
Remove recovery test 011_crash_recovery.pl
This test has been added as of 857ee8e, which introduced the SQL
function txid_status(), with the purpose of checking that a transaction
ID still in progress during a crash is correctly marked as aborted after
recovery finishes.
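
For context, pg_xact_status() (the current name of txid_status()) reports
'committed', 'aborted' or 'in progress' for a given transaction ID.  A
minimal sketch of that behavior, outside of any crash scenario, could look
like the following; the node name and the standalone script are
illustrative assumptions, not part of the removed test:

# Hypothetical standalone sketch (not part of the PostgreSQL test suite):
# shows what pg_xact_status() reports for committed and rolled-back XIDs.
use strict;
use warnings;
use PostgreSQL::Test::Cluster;
use Test::More;

my $node = PostgreSQL::Test::Cluster->new('demo');
$node->init;
$node->start;

# A transaction that commits reads back as 'committed' ...
my $committed_xid = $node->safe_psql('postgres',
	q[BEGIN; SELECT pg_current_xact_id(); COMMIT;]);
is($node->safe_psql('postgres', qq[SELECT pg_xact_status('$committed_xid');]),
	'committed', 'committed xid is reported as committed');

# ... and one that rolls back reads back as 'aborted'.  The removed test
# covered the remaining case: an XID still in progress when the server
# crashes must read back as 'aborted' once recovery finishes.
my $aborted_xid = $node->safe_psql('postgres',
	q[BEGIN; SELECT pg_current_xact_id(); ROLLBACK;]);
is($node->safe_psql('postgres', qq[SELECT pg_xact_status('$aborted_xid');]),
	'aborted', 'rolled-back xid is reported as aborted');

$node->stop;
done_testing();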
This test is unstable, and some configuration scenarios make that easier
to reproduce (wal_level=minimal, wal_compression=on), because the WAL
holding the information about the in-progress transaction ID may not
have made it to disk yet; hence a post-crash recovery may cause the same
XID to be reused, triggering a test failure.
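
Concretely, the race invalidates the two assertions at the end of the
removed test.  Below is an annotated excerpt of those assertions (the
comments are added here for explanation; they are not in the original
file shown in the diff further down):

# If the WAL record carrying the assignment of $xid never reached disk
# before the immediate shutdown, recovery restores an older next-XID
# counter, so the post-crash XID is not guaranteed to be greater ...
cmp_ok($node->safe_psql('postgres', 'SELECT pg_current_xact_id()'),
	'>', $xid, 'new xid after restart is greater');

# ... and $xid may then be handed out again to a brand-new transaction,
# in which case pg_xact_status('$xid') no longer describes the
# transaction interrupted by the crash and need not return 'aborted'.
is($node->safe_psql('postgres', qq[SELECT pg_xact_status('$xid');]),
	'aborted', 'xid is aborted after crash');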
We have discussed a few approaches, like making this function force a
WAL flush to make it reliable across crashes, but we do not want to pay
such a performance penalty in some scenarios either.  The test could have
been tweaked to enforce a checkpoint, but that actually breaks the
promise of the test to rely on a stable result of txid_status() after
a crash.
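
For illustration, the tweak that was considered and rejected would have
amounted to something like the following, placed just before the crash in
the removed test (a sketch only; this was never committed):

# Hypothetical tweak, not applied: force a checkpoint (and therefore a
# WAL flush) so that the record carrying the in-progress XID is
# guaranteed to be on disk before the crash.  This stabilizes the test,
# but only because it stops exercising the scenario the test was written
# for: a stable txid_status()/pg_xact_status() answer after a crash
# without any explicit flush by the application.
$node->safe_psql('postgres', 'CHECKPOINT;');

# Crash and restart the postmaster (unchanged from the removed test)
$node->stop('immediate');
$node->start;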
This issue has been reported a few times across the past years, with an
original report from Kyotaro Horiguchi. The buildfarm machines tanager,
hachi and gokiburi enable wal_compression, and fail on this test
periodically.
Discussion: https://postgr.es/m/3163112.1674762209@sss.pgh.pa.us
Discussion: https://postgr.es/m/20210305.115011.558061052471425531.horikyota.ntt@gmail.com
Backpatch-through: 11
Diffstat (limited to 'src')
-rw-r--r-- | src/test/recovery/t/011_crash_recovery.pl | 63
1 file changed, 0 insertions, 63 deletions
diff --git a/src/test/recovery/t/011_crash_recovery.pl b/src/test/recovery/t/011_crash_recovery.pl
deleted file mode 100644
index 1b57d01046d..00000000000
--- a/src/test/recovery/t/011_crash_recovery.pl
+++ /dev/null
@@ -1,63 +0,0 @@
-
-# Copyright (c) 2021-2022, PostgreSQL Global Development Group
-
-#
-# Tests relating to PostgreSQL crash recovery and redo
-#
-use strict;
-use warnings;
-use PostgreSQL::Test::Cluster;
-use PostgreSQL::Test::Utils;
-use Test::More;
-
-my $node = PostgreSQL::Test::Cluster->new('primary');
-$node->init(allows_streaming => 1);
-$node->start;
-
-my ($stdin, $stdout, $stderr) = ('', '', '');
-
-# Ensure that pg_xact_status reports 'aborted' for xacts
-# that were in-progress during crash. To do that, we need
-# an xact to be in-progress when we crash and we need to know
-# its xid.
-my $tx = IPC::Run::start(
-	[
-		'psql', '-X', '-qAt', '-v', 'ON_ERROR_STOP=1', '-f', '-', '-d',
-		$node->connstr('postgres')
-	],
-	'<',
-	\$stdin,
-	'>',
-	\$stdout,
-	'2>',
-	\$stderr);
-$stdin .= q[
-BEGIN;
-CREATE TABLE mine(x integer);
-SELECT pg_current_xact_id();
-];
-$tx->pump until $stdout =~ /[[:digit:]]+[\r\n]$/;
-
-# Status should be in-progress
-my $xid = $stdout;
-chomp($xid);
-
-is($node->safe_psql('postgres', qq[SELECT pg_xact_status('$xid');]),
-	'in progress', 'own xid is in-progress');
-
-# Crash and restart the postmaster
-$node->stop('immediate');
-$node->start;
-
-# Make sure we really got a new xid
-cmp_ok($node->safe_psql('postgres', 'SELECT pg_current_xact_id()'),
-	'>', $xid, 'new xid after restart is greater');
-
-# and make sure we show the in-progress xact as aborted
-is($node->safe_psql('postgres', qq[SELECT pg_xact_status('$xid');]),
-	'aborted', 'xid is aborted after crash');
-
-$stdin .= "\\q\n";
-$tx->finish;    # wait for psql to quit gracefully
-
-done_testing();