And if a single file is larger than 2GB?
This is quite a nasty issue. It can be solved, but only by parsing a listing of the file contents and adding up file sizes, which involves a fair amount of processing and, potentially, disk space on the target server.
I have done something similar (at a higher level): load-balancing backups across X backup streams so that each stream ended up around the same size. Without putting too much thought into it, the way I'd do this would be to take the output of a du command (with the option that reports sizes at the file level) and build up a set of list files, one per archive, each naming the files to be included in that archive. If du fails to give the full and/or relative path names, you'll need to do something with, probably, the find command.
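As a rough sketch of that balancing step, here is the same idea in Python instead of du/find: walk the tree, sort files largest-first, and greedily assign each one to whichever stream currently has the smallest running total. The function name and the choice of a greedy pass are my own illustration, not anything from a particular tool.

```python
import os

def balance_files(root, n_streams):
    """Greedy sketch: partition the files under `root` into
    n_streams lists whose total sizes come out roughly equal.

    Assigning largest-first to the lightest stream is a simple
    heuristic, not an optimal bin-packing."""
    sizes = []
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            sizes.append((os.path.getsize(path), path))
    sizes.sort(reverse=True)  # biggest files placed first

    totals = [0] * n_streams
    streams = [[] for _ in range(n_streams)]
    for size, path in sizes:
        i = totals.index(min(totals))  # lightest stream so far
        totals[i] += size
        streams[i].append(path)
    return streams, totals
```

Each resulting list could then be written to a file and handed to tar via its -T/--files-from option. Note that no grouping scheme helps with the original problem of one file that is itself over 2GB: that file would still have to be cut up with something like split(1), or archived with a format that doesn't have the limit.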
The moon on the one hand, the dawn on the other:
The moon is my sister, the dawn is my brother.
The moon on my left and the dawn on my right.
My brother, good morning: my sister, good night.
-- Hilaire Belloc