#1
    Registered User
    Devshed Newbie (0 - 499 posts)

    Join Date: Aug 2012
    Posts: 3
    Rep Power: 0

    Poor performance of PostgreSQL with a really big database

    I have worked for six months on a GIS project with PostgreSQL 9.1, 64-bit.
    I am totally disappointed with PostgreSQL because of its poor performance (slow speed).
    I use C/C++ and the libpq library.
    The database contains data for all of Europe.
    I have about 200 tables, and some tables have over 100 million records.

    I did tuning for the database
    (indexes on the tables, indexes on the fields used in queries,
    the CLUSTER command on the whole database, ANALYZE on the database, and changes to the Postgres config files).

    I tried the non-durable settings for Postgres, because I only read records and run simple queries.
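
    For reference, here is roughly how I issue those maintenance commands from libpq; the table, index, and connection names below are placeholders, not my real schema. Of the non-durable settings, only synchronous_commit can be changed per session; fsync and full_page_writes have to go in postgresql.conf.

    [code]
    #include <stdio.h>
    #include <libpq-fe.h>

    /* Run one SQL command and report failures. */
    static void run(PGconn *conn, const char *sql)
    {
        PGresult *res = PQexec(conn, sql);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "\"%s\" failed: %s", sql, PQerrorMessage(conn));
        PQclear(res);
    }

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=gis");   /* placeholder connection string */
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        run(conn, "CREATE INDEX roads_id_idx ON roads (id)");   /* placeholder names */
        run(conn, "CLUSTER roads USING roads_id_idx");  /* rewrite the table in index order */
        run(conn, "ANALYZE roads");                     /* refresh planner statistics */

        /* The only non-durable setting changeable per session; fsync = off and
           full_page_writes = off belong in postgresql.conf instead. */
        run(conn, "SET synchronous_commit = off");

        PQfinish(conn);
        return 0;
    }
    [/code]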

    I have read various articles on the internet about tuning PostgreSQL.
    I have read the PostgreSQL documentation.
    I use the database only for read-only operations (reading records) and simple queries.

    I cannot do a simple join between two tables and get the result in a reasonable time.
    This is amazing!

    It was my boss's decision to work with Postgres.

    Now I believe Postgres is not suitable for my project.

    I have a dedicated computer with a 500 GB SSD RAM-disk, 16 GB of RAM,
    and a 64-bit Windows Server OS.

    My application runs on the same computer as the Postgres database, with no concurrent database activity.
    I am working alone on this machine.

    I have worked with Microsoft SQL Server before, but with a small database
    compared to this GIS database, and I am sure SQL Server could not handle
    this database properly either.

    Now I am thinking of using Oracle. I have not used Oracle before, so I do not know
    what its performance would be in this situation.

    Somebody may blame me for using PostgreSQL on Windows, but
    developing the project on Windows is faster than on Linux.

    I only run simple queries like

    "select [fields] from table where id = value"

    The speed of reading records is very important to me.

    I noticed that reading a table with 100 million records is slow.
    I use cursors to read records in my code.

    I ran a simple test: I read the records, ordered by primary key, from the biggest table, which had been clustered beforehand.
    I am still not happy with the speed.
    I think Postgres does not cache consecutive records.
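
    The test loop looks roughly like this (again with placeholder table and column names): a cursor inside a transaction, fetched in large batches to cut down on round trips:

    [code]
    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=gis");   /* placeholder connection string */
        if (PQstatus(conn) != CONNECTION_OK)
            return 1;

        /* Cursors only live inside a transaction. */
        PQclear(PQexec(conn, "BEGIN"));
        PQclear(PQexec(conn,
            "DECLARE cur NO SCROLL CURSOR FOR "
            "SELECT id, name FROM roads ORDER BY id"));

        for (;;) {
            /* Fetch in batches of 10000 rows instead of one by one. */
            PGresult *res = PQexec(conn, "FETCH FORWARD 10000 FROM cur");
            if (PQresultStatus(res) != PGRES_TUPLES_OK) {
                fprintf(stderr, "fetch failed: %s", PQerrorMessage(conn));
                PQclear(res);
                break;
            }
            int n = PQntuples(res);
            for (int i = 0; i < n; i++) {
                /* process PQgetvalue(res, i, 0) and PQgetvalue(res, i, 1) here */
            }
            PQclear(res);
            if (n == 0)         /* no more rows */
                break;
        }

        PQclear(PQexec(conn, "CLOSE cur"));
        PQclear(PQexec(conn, "COMMIT"));
        PQfinish(conn);
        return 0;
    }
    [/code]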

    Can someone tell me about special settings for PostgreSQL and Windows Server
    for my main goal: reading records very fast?

    How can I set PostgreSQL to use more than 2 GB of RAM on Windows?

    Is libpq slow?

    How can I read records quickly?

    I am really frustrated by the poor performance of PostgreSQL.

    I tried using four dedicated connections for specific queries in my application, but the
    results are the same (slow record reads).
  #2
    Contributing User
    Devshed Frequenter (2500 - 2999 posts)

    Join Date: Oct 2003
    Location: Germany
    Posts: 2,777
    Rep Power: 348
    Please read this:

    https://wiki.postgresql.org/wiki/Slow_Query_Questions

    and then supply the missing information.
    I will neither read nor answer questions where the SQL code is messy and not properly formatted using [code] tags.
    http://forums.devshed.com/misc.php?do=bbcode#code

    Tips on how to ask better questions:
    http://tkyte.blogspot.de/2005/06/how-to-ask-questions.html
    http://wiki.postgresql.org/wiki/SlowQueryQuestions
    http://catb.org/esr/faqs/smart-questions.html
