Hi,
I have a Postgres database with about 2,500 schemas and I am trying to back it up. Since pg_dumpall takes a really long time to complete, I am trying to use pg_dump to selectively dump subsets of schemas in parallel, like the following:

Code:
pg_dump -U postgres cust_db -c -n 'cust_100*' > /backup/dump/cust_100.sql &
pg_dump -U postgres cust_db -c -n 'cust_200*' > /backup/dump/cust_200.sql &
..
..
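For reference, here is a rough sketch of how the whole set could be scripted (the schema prefixes and paths are illustrative, not my exact setup):

Code:
#!/bin/sh
# Dump each group of schemas in the background, then wait for all of them to finish.
for prefix in cust_100 cust_200 cust_300; do    # illustrative prefixes only
    pg_dump -U postgres cust_db -c -n "${prefix}*" > "/backup/dump/${prefix}.sql" &
done
wait    # block until every background pg_dump has completed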
This seems to work fine, but my only concern is that the size of the file produced by pg_dumpall is not equal to the sum of the sizes of the individual files produced by the pg_dump runs shown above. The difference is about 23 MB.

As per the Postgres documentation (quoted below), pg_dumpall dumps additional global objects that pg_dump does not. Is there a way to dump just the global objects so that I can restore them along with the individual dump files shown above?
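Conceptually, the restore I have in mind would go in this order (globals.sql here is just a placeholder for however the global objects end up being dumped, not an existing file):

Code:
# Hypothetical restore order: global objects first, then the per-schema dumps.
psql -U postgres -f /backup/dump/globals.sql postgres
psql -U postgres -d cust_db -f /backup/dump/cust_100.sql
psql -U postgres -d cust_db -f /backup/dump/cust_200.sql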

I'd appreciate it if someone could provide some insight. Thanks.

pg_dumpall is a utility for writing out ("dumping") all PostgreSQL databases of a cluster into one script file. The script file contains SQL commands that can be used as input to psql to restore the databases. It does this by calling pg_dump for each database in a cluster. pg_dumpall also dumps global objects that are common to all databases. (pg_dump does not save these objects.) This currently includes information about database users and groups, tablespaces, and properties such as access permissions that apply to databases as a whole.
- Jay