August 10th, 2012, 04:48 AM
PG_DUMP vs PG_DUMPALL
I have a Postgres database with about 2,500 schemas and I am trying to back it up. Since pg_dumpall takes a really long time to complete, I am trying to use pg_dump to selectively dump subsets of schemas in parallel, like the following:
pg_dump -U postgres cust_db -c -n 'cust_100*' > /backup/dump/cust_100.sql &
pg_dump -U postgres cust_db -c -n 'cust_200*' > /backup/dump/cust_200.sql &
This seems to work fine, but the one concern I have is that the file dumped by pg_dumpall is not the same size as the sum of the sizes of all the individual files dumped by the pg_dump commands above. The difference is about 23 MB.
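A minimal sketch of running those per-prefix dumps in parallel and waiting for all of them to finish before doing anything with the output files (the prefixes and paths follow the examples above; adjust them for your actual schema names):

```shell
#!/bin/sh
# Launch one pg_dump per schema-name prefix in the background.
# Prefixes here are illustrative; extend the list to cover all schemas.
for prefix in cust_100 cust_200; do
    pg_dump -U postgres cust_db -c -n "${prefix}*" \
        > "/backup/dump/${prefix}.sql" &
done

# Block until every background pg_dump has exited, so the .sql files
# are complete before any restore or size comparison.
wait
```

The `wait` matters: comparing file sizes while a background dump is still writing will understate the totals.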
As per the Postgres documentation (below), pg_dumpall dumps additional objects that pg_dump does not. Is there a way to dump just the global objects, so that I can restore them along with the individual dump files shown above?
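For what it's worth, pg_dumpall does have a flag for exactly this; a hedged sketch (untested against this cluster, file paths are illustrative):

```shell
# Dump only the cluster-wide objects (roles, tablespaces) that
# pg_dump skips, using pg_dumpall's --globals-only flag (-g).
pg_dumpall -U postgres --globals-only > /backup/dump/globals.sql

# On restore, load the globals first, then the per-schema dumps,
# since the schema dumps may reference roles defined in globals.sql:
#   psql -U postgres -f /backup/dump/globals.sql postgres
#   psql -U postgres -d cust_db -f /backup/dump/cust_100.sql
```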
I'd appreciate it if someone could provide some insights. Thanks.