#1
    Junior Member
    Devshed Newbie (0 - 499 posts)

    Join Date
    Aug 2003
    Location
    Michiyork
    Posts
    1
    Rep Power
    0

    Database size & performance question


    1) What are the major advantages and disadvantages of static vs. dynamic websites/pages?

    I understand the way a database-driven page works (built on the server) and the way a static page works (served as-is from a file), and with that...

    2) Is my 1.50 MB database getting too big?

    This is my first database and I am using ASP to access it (the database is made with MS Access). I am pretty sure it is all normalized. There are multiple tables, and each table has an ID and a filepath for an image (e.g., 1998_grass).

    If you don't understand what I am trying to do, here is the URL (this version is still in testing):
    http://www.cevl.msu.edu/ael/test/photos/work.asp

    I am only using this database for several pages in a giant website (not for the entire website itself). There are image gallery pages on the website, and I am using the database so I can eliminate the need for 10 different pages that each show one category of images (roughly like the sketch below). I do not plan on having the entire site database-driven.
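
    Roughly what I am doing, with made-up table and column names (my real ones differ), is one page like this instead of 10 static ones:

    <%
    ' work.asp?category=grass -- one ASP page serves every image category.
    ' Table/column names below are illustrative, not my real schema.
    Dim conn, rs, category
    category = Request.QueryString("category")

    Set conn = Server.CreateObject("ADODB.Connection")
    conn.Open "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & _
              Server.MapPath("photos.mdb")

    Set rs = conn.Execute("SELECT ID, FilePath FROM Images " & _
                          "WHERE Category = '" & Replace(category, "'", "''") & "'")
    Do While Not rs.EOF
        Response.Write "<img src=""" & rs("FilePath") & ".jpg"" /><br />"
        rs.MoveNext
    Loop
    rs.Close
    conn.Close
    %>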

    I am switching from the multiple static HTML pages to the database in hopes of improving performance and efficiency. But if I end up with a huge database file that takes up a lot of space on the server, that would defeat the purpose.

    I just need a little insight and information. I have checked all the usual resources and haven't found much on size, comparisons, or performance.

    Thank you!
#2
    I'm Baaaaaaack!
    Devshed God 1st Plane (5500 - 5999 posts)

    Join Date
    Jul 2003
    Location
    Maryland
    Posts
    5,538
    Rep Power
    247
    Forgive me if I get long-winded...

    Performance issues associated with using a database as the backend to a web site are directly related to the compute resources available and the number of hits in a given time. For instance, if you have 10 hits an hour on a 400 MHz machine with 128 MB RAM and an Access database, no one is likely to ever notice any lag in page retrieval (keep in mind that almost 100% of the time the actual bottleneck is the user's Internet connection). However, if you suddenly start to get 10 hits per second, that is likely to be a noticeable burden on the machine. People are spoiled nowadays with their 3 GHz machines and multiple gigabytes of RAM; few web sites need that kind of horsepower.

    If you do start to burden your server with the setup you have (i.e., one server running both web and database), the first thing to do is make sure you are using a real database such as Sybase, Oracle, SQL Server, PostgreSQL, etc. and not something like Access. The next thing is to get another machine, put your database on it, and have a gigabit connection betwixt your database and your web server. After that comes clustering, multiple servers with load balancing, etc., but by then you are in the range of the heavy hitters (not quite at Google's level, but creeping up close).

    Realistically, most of your dynamic content can probably be turned into static content, with a trigger built into the database to update the static page whenever the underlying data changes (dunno if Access has triggers, but you can fake it by running all database changes through stored procedures, or through a single routine -- see the sketch below). That can dramatically decrease the load on your server, as the fewer CGI/active content pages you have the better. I have read stats where a given server could only handle 5 hits per second with a simple CGI page but more than 50 hits per second with that same page delivered from a file.

    Optimization is a great thing to worry about, but first see if it matters. Keep track of which pages are hit and how often; in my mind, until you are getting multiple hits per second, I wouldn't worry about any of this at all. As the hit count goes up, modify the hardest-hit pages one at a time, keep an eye on the overall CPU/IO usage on the web server/database combination, and be prepared to split the database onto a separate server when your peak is reaching 80% and/or your average is around 40-50%. Hardware is relatively cheap: just throw some more processors and RAM at the server and wait for the new 10 GHz chips that come from Dell for $499.95 with 15 GB RAM (they are not far away, don't laugh!).
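
    Since Access has no triggers, one way to fake the "regenerate on change" idea is to rebuild the static page from the same ASP routine that updates the data. A rough sketch, with invented table/column/file names (adjust to your actual schema):

    <%
    ' Call this after any admin update to the Images table, so the
    ' static gallery page is rewritten and normal visitors never
    ' touch the database. (All names here are made up.)
    Sub RegenerateGalleryPage(conn, category)
        Dim rs, fso, f
        Set rs = conn.Execute("SELECT FilePath FROM Images " & _
                              "WHERE Category = '" & Replace(category, "'", "''") & "'")
        Set fso = Server.CreateObject("Scripting.FileSystemObject")
        Set f = fso.CreateTextFile(Server.MapPath(category & ".html"), True)
        f.WriteLine "<html><body>"
        Do While Not rs.EOF
            f.WriteLine "<img src=""" & rs("FilePath") & ".jpg"" />"
            rs.MoveNext
        Loop
        f.WriteLine "</body></html>"
        f.Close
        rs.Close
    End Sub
    %>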

    As for the size of your database, how much RAM you have determines when the size of the database starts to matter. If you have more RAM than database, the entire database will more than likely end up cached in RAM. Even with a large database (I have worked with databases of 20 million rows with millisecond access times, and the hardware was nothing special), the indexes are likely to be held in RAM, so you will have minimal page lookups (presuming you have the proper indexes). If you start to see a lot of lag in database access, check what you have indexed and make sure that the queries are using the indexes properly (maybe update your statistics more often). RAM is cheap nowadays: you should be running more than a gig on your development machine and the max the motherboard will handle on your production machines. Next I would worry about your hard drives: get SCSI ("scuzzy") RAID drives with lots of cache for faster I/O, move your logs to a separate device, make sure that your database server is not being used for something else, etc.
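
    Jet (the engine behind Access) accepts plain SQL DDL for indexes, so a one-time setup script is enough. Table and column names here are placeholders for whatever your gallery queries filter on:

    <%
    ' One-time setup: index the column your queries filter on, so
    ' Jet can avoid a full table scan. (Names are illustrative.)
    Dim conn
    Set conn = Server.CreateObject("ADODB.Connection")
    conn.Open "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & _
              Server.MapPath("photos.mdb")
    conn.Execute "CREATE INDEX idxCategory ON Images (Category)"
    conn.Close
    %>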

    Keep in mind that none of this matters if you do not have so many hits that your server is being bogged down. A 3 GHz machine is FAST and the RAM nowadays is blazing, so I would not spend a whole lot of time worrying about this unless the guy administering your servers is telling you they are getting hammered.

    BTW: Ever heard of 'SQL injection attacks'? You can make your entire network vulnerable to simple hacker attacks through your database (less so, of course, with Access (please don't go into production with Access!!!???)), so take a couple of hours and do some research. Just because most people are security lazy doesn't mean you have to be.
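
    The basic defense is to keep user input out of the SQL text entirely by using ADO parameters. A minimal sketch, reusing the made-up table names from above:

    <%
    ' Parameterized query with ADODB.Command -- user input never
    ' becomes part of the SQL text, which blocks SQL injection.
    Const adCmdText = 1, adVarChar = 200, adParamInput = 1

    Dim conn, cmd, rs
    Set conn = Server.CreateObject("ADODB.Connection")
    conn.Open "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & _
              Server.MapPath("photos.mdb")

    Set cmd = Server.CreateObject("ADODB.Command")
    Set cmd.ActiveConnection = conn
    cmd.CommandType = adCmdText
    cmd.CommandText = "SELECT ID, FilePath FROM Images WHERE Category = ?"
    cmd.Parameters.Append cmd.CreateParameter("cat", adVarChar, adParamInput, _
                                              50, Request.QueryString("category"))
    Set rs = cmd.Execute
    Do While Not rs.EOF
        Response.Write Server.HTMLEncode(rs("FilePath")) & "<br />"
        rs.MoveNext
    Loop
    rs.Close
    conn.Close
    %>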

    Did any of this help?

    My blog, The Fount of Useless Information http://sol-biotech.com/wordpress/
    Free code: http://sol-biotech.com/code/.
    Secure Programming: http://sol-biotech.com/code/SecProgFAQ.html.
    Performance Programming: http://sol-biotech.com/code/PerformanceProgramming.html.
    LinkedIn Profile: http://www.linkedin.com/in/keithoxenrider

    It is not that old programmers are any smarter or code better, it is just that they have made the same stupid mistake so many times that it is second nature to fix it.
    --Me, I just made it up

    The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man.
    --George Bernard Shaw
