$PostgreSQL: pgsql/contrib/pg_dumplo/README.pg_dumplo,v 1.3 2003/11/29 19:51:35 pgsql Exp $

pg_dumplo - PostgreSQL large object dumper
==========================================

By Karel Zak <zakkr@zf.jcu.cz>


Compilation:
============

 * run the top-level ./configure in the PostgreSQL source directory
 * gmake all
 * gmake install
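
For example, building just this module after the top-level configure
run might look like this (a sketch; the path to your source tree will
differ):

SnowWhite:$ cd postgresql            # top of the PG source tree (assumed)
SnowWhite:$ ./configure
SnowWhite:$ cd contrib/pg_dumplo
SnowWhite:$ gmake all
SnowWhite:$ gmake install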

THANKS:
=======

 <??? I lost his e-mail ???>
 * option '--all' and pg_class usage

 Pavel Janík ml. <Pavel.Janik@linux.cz>
 * HOWTO (the rest of this file)


How to use pg_dumplo?
=====================

(c) 2000, Pavel Janík ml. <Pavel.Janik@linux.cz>


Q: How do you use pg_dumplo?
============================

A: Here is a small demo of backing up a database table together with
the large objects it references:


We will create a demo database and a small demonstration table `lo'
inside it:

SnowWhite:$ createdb test
CREATE DATABASE

OK, our database named 'test' is created. Now we create a demo table
containing a single column 'id' which will hold the OID of a large
object:

SnowWhite:$ psql test
Welcome to psql, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help on internal slash commands
       \g or terminate with semicolon to execute query
       \q to quit

test=# CREATE TABLE lo (id oid);
CREATE
test=# \lo_import /etc/aliases
lo_import 19338
test=# INSERT INTO lo VALUES (19338);
INSERT 19352 1
test=# select * from lo;
 id
-------
 19338
(1 row)

test=# \q

In the above example you can see that we have also imported one large
object, the file /etc/aliases. It received the OID 19338, so we
inserted that OID into the column 'id' of table lo. The final SELECT
shows that we have one record in the table.
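
As an aside, you need not retype the OID by hand: psql stores the OID
returned by \lo_import in its :LASTOID variable, and the server also
provides a server-side lo_import() function (which reads the file on
the server machine and requires superuser rights). A sketch of both
variants:

test=# \lo_import /etc/aliases
test=# INSERT INTO lo VALUES (:LASTOID);

test=# INSERT INTO lo VALUES (lo_import('/etc/aliases'));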

Now we can demonstrate the work of pg_dumplo. First we create a dump
directory which will hold the whole large object dump (/tmp/dump):

SnowWhite:$ mkdir -p /tmp/dump

Now we can dump all large objects from the database `test' whose OIDs
are stored in the column `id' of the table `lo':

SnowWhite:$ pg_dumplo -s /tmp/dump -d test -l lo.id
pg_dumplo: dump lo.id (1 large obj)

Voila, we have the dump of all large objects in our directory:

SnowWhite:$ tree /tmp/dump/
/tmp/dump/
`-- test
    |-- lo
    |   `-- id
    |       `-- 19338
    `-- lo_dump.index

3 directories, 2 files
SnowWhite:$

In practice, we'd probably use

SnowWhite:$ pg_dumplo -s /tmp/dump -d test -e

to export all large objects that are referenced by any OID-type column
in the database. Calling out specific columns with -l is only needed
for a selective dump.
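
The -l option can name several table.column pairs at once, so a
selective dump of more than one column might look like this (a sketch;
the second table and column, docs.body, are made up for illustration):

SnowWhite:$ pg_dumplo -s /tmp/dump -d test -l lo.id docs.body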

For routine backup purposes, the dump directory can now be packed into
an archive file with tar and stored on tape. Note that a single dump
directory can hold the dumps of multiple databases.
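
For instance (a sketch; choose whatever archive name and compression
suit your backup scheme):

SnowWhite:$ tar czf /tmp/test_lo_backup.tar.gz -C /tmp dump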

Now, how do we recreate the contents of the table lo and of the large
objects themselves if something goes wrong? Here we assume that pg_dump
was also used to store the definition and contents of the regular
tables in the database:

SnowWhite:$ pg_dump test >test.backup

Now, if we lose the database:

SnowWhite:$ dropdb test
DROP DATABASE

we can recreate it and reload the regular tables from the dump file:

SnowWhite:$ createdb test
CREATE DATABASE

SnowWhite:$ psql test <test.backup

But at this point our database has no large objects in it. What's more,
the large-object-referencing columns contain the OIDs of the old large
objects, which will not be the OIDs they'll have when reloaded. Never
fear: pg_dumplo will fix the large object references at the same time
it reloads the large objects. We reload the LO data from the dump
directory like this:

SnowWhite:$ pg_dumplo -s /tmp/dump -d test -i
19338 lo id test/lo/id/19338
SnowWhite:$

And that's all there is to it. The contents of table lo are
automatically updated to refer to the new large object OIDs.
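
You can check the rewrite with an ordinary SELECT; the column now holds
the new OID, i.e. some value other than 19338:

SnowWhite:$ psql test -c 'SELECT * FROM lo;'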

Summary: In this small example we have shown that pg_dumplo makes it
easy to dump and restore a database's large objects in their entirety.
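
Putting it all together, a complete backup and restore cycle might be
scripted like this (a sketch under the same assumptions as above; error
handling omitted):

SnowWhite:$ pg_dump test >test.backup
SnowWhite:$ pg_dumplo -s /tmp/dump -d test -e

SnowWhite:$ dropdb test
SnowWhite:$ createdb test
SnowWhite:$ psql test <test.backup
SnowWhite:$ pg_dumplo -s /tmp/dump -d test -i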

For more information see the built-in help (pg_dumplo -h).