September 1st, 2000, 03:54 PM
I am in the process of feeling out what I would need for a small scale distributed computing effort.
There is a cipher at http://www.bokler.com/eapoe.html that I'm interested in brute-forcing (it is part of a contest, nothing illegal).
In order to do so, I need to keep track of what has been done, who did it, and when. That is it. The hard part is how many of these there are. There are (26^6)*2 possibilities in the best-case scenario that we will have to go through; the worst-case scenario is 26!*12 (that is 26 factorial, times 12) possibilities.
So let's think best-case scenario first - that is 617,831,552 rows.
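As a quick sanity check, the two keyspace sizes above can be reproduced like this (the *2 factor is my reading of the F/B parameter described below - an assumption on my part, not something from the contest page):

```python
import math

# Best case: 26^6 keys, doubled for the F/B variant.
best_case = (26 ** 6) * 2
print(best_case)   # 617831552

# Worst case: every permutation of the 26-letter alphabet, times 12.
worst_case = math.factorial(26) * 12
print(worst_case)  # 4839497533519267627008000000
```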
Each row would look something like this:
Only 5 cols, but 618 million of them. The first col is a username, likely an e-mail address. The second col would be the date it is submitted. The 3rd would be the alphabet - it will always be some permutation of the alphabet, no more, no less. The 4th is the score, always less than 10K. And the 5th col is either the letter F or the letter B. (The worst-case scenario would be the same type of rows, but there would be 4839497533519267627008000000 of them.)
The clients submitting these would submit probably 10,000 at a time and that would happen maybe every half hour to hour. There will be at least 4 clients and as many as probably 50.
The machine that would hold the database is a dual PIII 667 with a 20 GB hard drive and 512 MB of RAM running Mandrake Linux. I'm assuming the hard drive isn't enough.
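Here is a back-of-the-envelope estimate of the best-case table size; the per-field byte counts are my own guesses, not measured MySQL storage:

```python
# Best-case row count from above: (26^6)*2.
ROWS = (26 ** 6) * 2  # 617,831,552 rows

# Assumed field sizes in bytes: e-mail ~30, date 8, alphabet 26,
# score 2, F/B flag 1, plus ~8 bytes of per-row overhead.
ROW_BYTES = 30 + 8 + 26 + 2 + 1 + 8  # 75 bytes per row (assumed)

total_gb = ROWS * ROW_BYTES / 2 ** 30
print(f"{total_gb:.1f} GB")  # roughly 43 GB, well over a 20 GB drive
```

Even with much tighter packing (say 50 bytes per row), the table alone would run around 30 GB before indexes, so the 20 GB drive does look like a problem.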
So if I could get some feedback on this - I'd be very happy. Is it feasible with that system? Is it feasible with MySQL? What resources should I use for designing it? Any ideas beyond that?
September 3rd, 2000, 03:22 PM
Uhh, I'm pretty sure MySQL will work like crap with so many rows/columns. MySQL wasn't designed for such a big database. I would say MySQL works decently up to about 50,000 rows and then you should upgrade to Oracle... and 50,000 is pushing it.
September 3rd, 2000, 05:28 PM
Hey, gus, I suggest not posting if you don't know what you are talking about. MySQL can handle tables with millions of records just fine, thank you very much. That said,
esmith, the limitation with MySQL is going to be the total size of the db. You are generally limited to 4 GB per table (depending on the OS, filesystem, and version of MySQL).
I agree that MySQL is probably not the best solution for what you are doing here, just not for the reasons gus gives.
September 3rd, 2000, 10:44 PM
Thanks for the responses!
The attractive thing about MySQL was that it was free.
Any chance there is a free solution that could handle something along the lines of that size?
I also think I can rule out certain sets that wouldn't be allowed, but I'm not sure I can get it down to several million.
Anyway, thanks for the response!
September 4th, 2000, 03:04 AM
I believe the new version of MySQL (still in beta but pretty stable) supports >4GB table sizes, even on 32-bit hardware. Check it out.
I would say that your hardware is probably underpowered to handle that kind of data, also. I would think you should have at least 2GB of RAM, and a SCSI RAID system.
I also recommend FreeBSD instead of Linux, if you want serious reliability and stability.
You might also consider PostgreSQL. It is not as fast as MySQL for a system doing mainly reads, but it seems your system would be doing a lot of mixed reads and writes, which some people claim PostgreSQL handles better. Worth a try, maybe.