September 23rd, 2012, 03:32 PM
-
Memory issues
Hi,
I'm using an Amazon EC2 instance with the following spec, and the application I'm running uses a PostgreSQL 9.1 database.
The app has 3 main cron jobs.
Ubuntu 12, High-Memory Extra Large Instance
17.1 GB of memory
6.5 EC2 Compute Units (2 virtual cores with 3.25 EC2 Compute Units each)
420 GB of instance storage
64-bit platform
I've changed the main default values under file postgresql.conf to:
shared_buffers = 4GB
work_mem = 4GB
wal_buffers = 16MB
checkpoint_segments = 32
effective_cache_size = 8GB
When I run the app, after an hour or two `free -m` looks like below and the cron jobs fail because of a lack of memory or similar (I'm new to Postgres and DB administration).
Thanks!
free -m output:
             total       used       free     shared    buffers     cached
Mem:         17079      13742       3337          0         64      11882
-/+ buffers/cache:       1796      15283
Swap:          511          0        511

             total       used       free     shared    buffers     cached
Mem:         17079      16833        245          0         42      14583
-/+ buffers/cache:       2207      14871
Swap:          511          0        511
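For reference, the `-/+ buffers/cache` row already subtracts the kernel's page cache, which Linux reclaims on demand, so the memory actually held by processes is far below the "used" figure. A quick sketch using the numbers from the second snapshot:

```shell
# "used" = 16833 MB, but 42 MB of that is buffers and 14583 MB is page
# cache, both reclaimable by the kernel. Memory really held by processes:
echo "16833 42 14583" | awk '{print $1 - $2 - $3 " MB used by processes"}'
# matches the 2207 MB in the -/+ buffers/cache row (up to rounding)
```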
Errors:
DBI connect('database=---;host=localhost','postgres',...) failed: could not fork new process for connection: Cannot allocate memory
could not fork new process for connection: Cannot allocate memory
and
execute failed: ERROR: out of memory
DETAIL: Failed on request of size 968. [for Statement "
SELECT DISTINCT....
Thank you!
September 23rd, 2012, 04:02 PM
-
A work_mem of 4GB is way, way too high.
That memory can be allocated several times by a single query (and by each and every connection). Anything beyond 32MB is very uncommon.
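To put a number on it (a rough sketch, assuming the default max_connections = 100):

```shell
# Every sort or hash step in every backend may allocate up to work_mem,
# so with max_connections = 100 and work_mem = 4GB the worst case is:
echo "$((100 * 4)) GB of potential work_mem allocations against 17 GB of RAM"
```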
I will not read nor answer questions where the SQL code is messy and not formatted properly using [code] tags.
http://forums.devshed.com/misc.php?do=bbcode#code
Tips on how to ask better questions:
http://tkyte.blogspot.de/2005/06/how-to-ask-questions.html
http://wiki.postgresql.org/wiki/SlowQueryQuestions
http://catb.org/esr/faqs/smart-questions.html
September 23rd, 2012, 04:18 PM
-
Originally Posted by shammat
A work_mem of 4GB is way, way too high.
That memory can be allocated several times by a single query (and by each and every connection). Anything beyond 32MB is very uncommon.
----
Thank you, I'll check it out right away. I appreciate your help.
September 23rd, 2012, 05:55 PM
-
Originally Posted by shammat
A work_mem of 4GB is way, way too high.
That memory can be allocated several times by a single query (and by each and every connection). Anything beyond 32MB is very uncommon.
Hi,
I changed it to 16MB and it seems to help a bit, but I still think the queries take too much memory or don't release it: a lot of the connections stay idle while holding allocated memory (I think).
What else can I check?
The free column in `free -m` just keeps going down and doesn't go back up even when nothing is running. Is this normal?
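One way to check whether idle backends are piling up (a sketch, assuming local access as the postgres user; note that on 9.1 the view uses procpid and current_query rather than the pid/query/state columns of later releases):

```shell
# List every backend; idle ones show "<IDLE>" in current_query (9.1).
psql -U postgres -c "SELECT procpid, usename, current_query FROM pg_stat_activity;"
```

This needs a running server and a superuser (or same-user) connection to see other sessions' queries.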
Thanks!
September 24th, 2012, 01:35 AM
-
You should take that question to the PostgreSQL mailing list. I have no experience with setting up Postgres on Linux.
September 24th, 2012, 01:37 AM
-
Originally Posted by shammat
You should take that question to the PostgreSQL mailing list. I have no experience with setting up Postgres on Linux.
Thank you.