| author | Tom Lane <tgl@sss.pgh.pa.us> | 2015-11-21 20:21:32 -0500 |
|---|---|---|
| committer | Tom Lane <tgl@sss.pgh.pa.us> | 2015-11-21 20:22:39 -0500 |
| commit | 7acad954639df1c44877a37671339b8a7efce8c7 | |
| tree | dd944db99b86c5253c1051ba807c7307b7a7625a /src/backend/replication/basebackup.c | |
| parent | b29a40fea78ec003146f1aa57aa60413cd560c45 | |
Adopt the GNU convention for handling tar-archive members exceeding 8GB.
The POSIX standard for tar headers requires archive member sizes to be
printed in octal with at most 11 digits, limiting the representable file
size to 8GB. However, GNU tar and apparently most other modern tars
support a convention in which oversized values can be stored in base-256,
allowing any practical file to be a tar member. Adopt this convention
to remove two limitations:
* pg_dump with -Ft output format failed if the contents of any one table
exceeded 8GB.
* pg_basebackup failed if the data directory contained any file exceeding
8GB. (This would be a fatal problem for installations configured with a
table segment size of 8GB or more, and it has also been seen to fail when
large core dump files exist in the data directory.)
File sizes under 8GB are still printed in octal, so that no compatibility
issues are created except in cases that would have failed entirely before.
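To make the convention concrete, here is a minimal sketch of writing a 12-byte tar size field both ways; the helper name `tar_write_size` is invented for illustration and is not the patch's actual code:

```c
#include <stdio.h>
#include <stdint.h>

/*
 * Write "val" into the 12-byte size field of a tar header.  Values
 * below 8GB fit in 11 octal digits; anything larger uses the GNU
 * base-256 convention: set the high bit of the first byte and store
 * the value big-endian in the remaining bits.
 */
static void
tar_write_size(char *field, uint64_t val)
{
	if (val < (uint64_t) 1 << 33)
		snprintf(field, 12, "%011llo", (unsigned long long) val);
	else
	{
		int			i;

		for (i = 11; i >= 0; i--)	/* big-endian, least significant byte last */
		{
			field[i] = (char) (val & 0xFF);
			val >>= 8;
		}
		field[0] |= 0x80;		/* flag bit marking the base-256 form */
	}
}

int
main(void)
{
	char		field[12];

	tar_write_size(field, (uint64_t) 10 * 1024 * 1024 * 1024);	/* 10GB member */
	printf("flag byte: 0x%02x\n", (unsigned char) field[0]);
	return 0;
}
```

Because the octal form is still used below 8GB, archives that were previously valid come out unchanged; only members that could not be written at all before get the base-256 header.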
In addition, this patch fixes several bugs in the same area (for the integer-width mechanics behind them, see the sketch after this list):
* In 9.3 and later, we'd defined tarCreateHeader's file-size argument as
size_t, which meant that on 32-bit machines it would write a corrupt tar
header for file sizes between 4GB and 8GB, even though no error was raised.
This broke both "pg_dump -Ft" and pg_basebackup for such cases.
* pg_restore from a tar archive would fail on tables of size between 4GB
and 8GB, on machines where either "size_t" or "unsigned long" is 32 bits.
This happened even with an archive file not affected by the previous bug.
* pg_basebackup would fail if there were files of size between 4GB and 8GB,
even on 64-bit machines.
* In 9.3 and later, "pg_basebackup -Ft" failed entirely, for any file size,
on 64-bit big-endian machines.
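These failures all stem from mixing integer widths. The following standalone sketch illustrates the class of bug described above, not the actual PostgreSQL code paths: a 64-bit file size silently truncated through a 32-bit type, and a scanf conversion that writes only 32 of a 64-bit variable's bits, which are the high-order half on a big-endian machine.

```c
#include <stdio.h>
#include <stdint.h>

int
main(void)
{
	/*
	 * Truncation: a 6GB file size forced through a 32-bit type, as
	 * happens when a 64-bit value is assigned to a 32-bit size_t.
	 */
	uint64_t	file_size = (uint64_t) 6 * 1024 * 1024 * 1024;
	uint32_t	truncated = (uint32_t) file_size;

	printf("64-bit size: %llu\n", (unsigned long long) file_size);
	printf("through 32 bits: %u\n", truncated);	/* prints 2147483648 */

	/*
	 * Width mismatch: %o converts into an unsigned int, so only four
	 * of the eight bytes of "parsed" are written.  On little-endian
	 * hardware those happen to be the low-order bytes; on big-endian
	 * hardware they are the high-order ones, making the result wrong
	 * by a factor of 2^32.  (This is undefined behavior and is shown
	 * purely to illustrate the bug class.)
	 */
	uint64_t	parsed = 0;

	sscanf("1750", "%o", (unsigned int *) &parsed);
	printf("parsed: %llu\n", (unsigned long long) parsed);
	return 0;
}
```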
In view of these potential data-loss bugs, back-patch to all supported
branches, even though removal of the documented 8GB limit might otherwise
be considered a new feature rather than a bug fix.
Diffstat (limited to 'src/backend/replication/basebackup.c')
| -rw-r--r-- | src/backend/replication/basebackup.c | 18 |

1 file changed, 1 insertion, 17 deletions
```diff
diff --git a/src/backend/replication/basebackup.c b/src/backend/replication/basebackup.c
index de019edb24e..f26c42c1f37 100644
--- a/src/backend/replication/basebackup.c
+++ b/src/backend/replication/basebackup.c
@@ -748,7 +748,7 @@ SendBackupHeader(List *tablespaces)
 		}
 		else
 		{
-			Size		len;
+			Size		len;
 
 			len = strlen(ti->oid);
 			pq_sendint(&buf, len, 4);
@@ -1165,13 +1165,6 @@ sendDir(char *path, int basepathlen, bool sizeonly, List *tablespaces)
 }
 
 /*
- * Maximum file size for a tar member: The limit inherent in the
- * format is 2^33-1 bytes (nearly 8 GB). But we don't want to exceed
- * what we can represent in pgoff_t.
- */
-#define MAX_TAR_MEMBER_FILELEN (((int64) 1 << Min(33, sizeof(pgoff_t)*8 - 1)) - 1)
-
-/*
  * Given the member, write the TAR header & send the file.
  *
  * If 'missing_ok' is true, will not throw an error if the file is not found.
@@ -1199,15 +1192,6 @@ sendFile(char *readfilename, char *tarfilename, struct stat * statbuf,
 				 errmsg("could not open file \"%s\": %m", readfilename)));
 	}
 
-	/*
-	 * Some compilers will throw a warning knowing this test can never be true
-	 * because pgoff_t can't exceed the compared maximum on their platform.
-	 */
-	if (statbuf->st_size > MAX_TAR_MEMBER_FILELEN)
-		ereport(ERROR,
-				(errmsg("archive member \"%s\" too large for tar format",
-						tarfilename)));
-
 	_tarWriteHeader(tarfilename, NULL, statbuf);
 
 	while ((cnt = fread(buf, 1, Min(sizeof(buf), statbuf->st_size - len), fp)) > 0)
```
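On the read side, a decoder must accept both representations. Below is a minimal sketch under the same assumptions as before (the function name `tar_read_size` is invented; the real readers in pg_dump and pg_basebackup are structured differently):

```c
#include <stdint.h>

/*
 * Decode the 12-byte size field of a tar header.  A POSIX header
 * stores an 11-digit octal number; a header using the GNU base-256
 * convention sets the high bit of the first byte and stores the
 * value big-endian in the remaining bits.
 */
static uint64_t
tar_read_size(const unsigned char *field)
{
	uint64_t	val = 0;
	int			i;

	if (field[0] & 0x80)
	{
		val = field[0] & 0x7F;	/* mask off the base-256 flag bit */
		for (i = 1; i < 12; i++)
			val = (val << 8) | field[i];
	}
	else
	{
		/* octal digits, terminated by space or NUL */
		for (i = 0; i < 12 && field[i] >= '0' && field[i] <= '7'; i++)
			val = (val << 3) | (uint64_t) (field[i] - '0');
	}
	return val;
}
```

Feeding a field produced by the encoder sketch above back through this function returns the original value, whichever representation was chosen.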