January 24th, 2013, 12:33 PM
I'm looking for a solution to a problem and I think I have one in mind, but I'm not entirely sure it's the correct fit. I've googled around, and while I see that it's completely doable, I haven't found anything about people who have done it this way.
I have a database — well, a single table, but it could be split out into its own database. It doesn't reference any other tables, the data is aged out after 24 hours, and no permanent storage is required for it, because the data is aggregated into a permanent storage location at a set interval.
As you can imagine, the IO for this database is intense. The current answer to that IO load is Solid State Disks, but SSDs have a life-expectancy problem when a lot of data is being written to them. So my plan is to create a tmpfs filesystem and move the aged-out table onto it using tablespaces.
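For what it's worth, here's a rough sketch of what I had in mind — mount point, size, and the `mem_space`/`hot_data` names are all placeholders, and the commands would need root plus a running PostgreSQL server:

```shell
# Create and mount the tmpfs (run as root; size is a guess, tune to the data set).
mkdir -p /mnt/pg_tmpfs
mount -t tmpfs -o size=4G,mode=0700 tmpfs /mnt/pg_tmpfs
chown postgres:postgres /mnt/pg_tmpfs

# Create a tablespace on the tmpfs mount and move the hot table onto it.
psql -U postgres <<'SQL'
CREATE TABLESPACE mem_space LOCATION '/mnt/pg_tmpfs';
ALTER TABLE hot_data SET TABLESPACE mem_space;
SQL
```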
I was curious whether I actually need to move the table out into its own database, mainly because if the system goes down for whatever reason, the data will no longer exist — tmpfs is a purely in-memory filesystem.
If the table were in its own database, that database could have the tablespace to itself, so that when the system comes back up I can have a script run around Postgres startup which recreates the database and tables.
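Something like the boot-time script below is what I'm picturing (every path and name here is a placeholder, and I'm assuming the DDL has to run *after* the daemon is up, since `psql` needs a live server — the tmpfs remount itself happens first):

```shell
#!/bin/sh
# Hypothetical boot script: remount the tmpfs, then rebuild the throwaway
# database on it. Names (/mnt/pg_tmpfs, mem_space, ephemeral_db, the schema
# file) are all illustrative.
mkdir -p /mnt/pg_tmpfs
mountpoint -q /mnt/pg_tmpfs || \
    mount -t tmpfs -o size=4G,mode=0700 tmpfs /mnt/pg_tmpfs
chown postgres:postgres /mnt/pg_tmpfs

service postgresql start   # DDL below needs a running server

# Drop any stale catalog entries from the previous boot, then recreate
# the tablespace and the database that lives on it.
psql -U postgres <<'SQL'
DROP DATABASE IF EXISTS ephemeral_db;
DROP TABLESPACE IF EXISTS mem_space;
CREATE TABLESPACE mem_space LOCATION '/mnt/pg_tmpfs';
CREATE DATABASE ephemeral_db TABLESPACE mem_space;
SQL

# Recreate the tables from a schema dump kept on durable storage.
psql -U postgres -d ephemeral_db -f /etc/postgresql/ephemeral_schema.sql
```

Does that seem like a sane approach, or am I missing a reason this won't work?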