August 13th, 2002, 11:04 AM
Access DB is reaching limit...upgrade to what?
Our company currently distributes a database that uses the Access 2000 runtime as both front-end and back-end. It is used by single users on desktop machines to retrieve scientific data. However, we are nearing the 2 GB data limit, and the database is noticeably slow (one table contains 8 million records).
My question is this: what database should we consider upgrading to? I was thinking of the SQL Server Desktop Engine (MSDE), keeping the current Access front-end code. Unfortunately, MSDE also has a 2 GB limit. But could we link multiple databases together in MSDE? We are actually doing this in Access 2000, but it is quite slow. What other desktop databases would anyone recommend?
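For illustration, the "link multiple databases" idea can be sketched with SQLite's ATTACH from Python. This is only a stand-in for the concept: MSDE/SQL Server instead reaches across databases with three-part names like `otherdb.dbo.tablename`, and all table, column, and file names below are made up.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE measurements (id INTEGER, value REAL)")
con.execute("INSERT INTO measurements VALUES (1, 0.5), (2, 1.5)")

# Attach a second (here, in-memory) database under the name 'part2';
# with files on disk this would be: ATTACH DATABASE 'part2.db' AS part2
con.execute("ATTACH DATABASE ':memory:' AS part2")
con.execute("CREATE TABLE part2.measurements (id INTEGER, value REAL)")
con.execute("INSERT INTO part2.measurements VALUES (3, 2.5)")

# One query spanning both databases -- analogous to linking
# multiple back-end databases behind a single front-end.
rows = con.execute(
    "SELECT id, value FROM measurements "
    "UNION ALL SELECT id, value FROM part2.measurements "
    "ORDER BY id"
).fetchall()
print(rows)  # [(1, 0.5), (2, 1.5), (3, 2.5)]
```

The split keeps each file under the per-file limit while the front-end still sees one logical data set.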
August 13th, 2002, 12:11 PM
so are there different versions of the database on each person's desktop? no central server? is this part of a product you all sell?
just off the top of my head...maybe berkeley db might be good?
August 13th, 2002, 12:45 PM
each person who buys this database has their own static copy on their computer. there is no central server customers connect to.
August 13th, 2002, 01:23 PM
I would guess that the problem you are running into is partly because standard Windows 9x desktop computers, with FAT file systems, have a maximum file size of 2 GB. Windows 2000, XP, etc. (with NTFS) allow for much greater file sizes.
So if you plan to run any database system on a Windows 9x computer, you will run into this problem once any particular file exceeds 2 GB. Access combines the whole database into one file, so the whole DB size is limited, while most SQL databases use a separate file for each table. Two open source databases have conquered this limit in different ways: MySQL and PostgreSQL. MySQL allows you to "merge" multiple identical tables so they can be queried as one. If your application is read-only, MySQL can also compress tables for better performance (see http://www.mysql.com/doc/en/Table_size.html -- ignore the fact that Windows is not in the list; the same constraints apply). PostgreSQL has virtually NO limit on table size, because it transparently spreads a table across multiple files as needed.
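To make the "merge" idea concrete: MySQL's MERGE table type presents several identically-structured MyISAM tables as one logical table (roughly `CREATE TABLE ... TYPE=MERGE UNION=(t1,t2)` in the MySQL of that era). A rough sketch of the same idea as a UNION ALL view, using SQLite from Python purely for illustration; the split-by-year table names are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Two identically-structured tables, e.g. one big table split by year.
con.execute("CREATE TABLE data_2000 (id INTEGER, reading REAL)")
con.execute("CREATE TABLE data_2001 (id INTEGER, reading REAL)")
con.execute("INSERT INTO data_2000 VALUES (1, 9.8), (2, 3.2)")
con.execute("INSERT INTO data_2001 VALUES (3, 7.1)")

# A view that behaves like one big table -- conceptually what a
# MySQL MERGE table does (MERGE also supports inserts and indexes).
con.execute("""
    CREATE VIEW data_all AS
    SELECT id, reading FROM data_2000
    UNION ALL
    SELECT id, reading FROM data_2001
""")

count = con.execute("SELECT COUNT(*) FROM data_all").fetchone()[0]
print(count)  # 3
```

Each underlying table (and file) stays small, while queries run against the combined view.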
I have a feeling that MySQL would be the easiest for you to implement, even though PostgreSQL is technically the superior solution. The problem with PostgreSQL is that it was written primarily for Unix and is still not easy to get working right under Windows. Work is under way on a complete Windows port of PostgreSQL, though, so keep your radar on.
MySQL is free to download and open source, but it is released under the GPL, so you must consider what that means for your software release. PostgreSQL is completely free to download, completely open source, and can be compiled into any proprietary product without any licensing worries beyond a small statement inside the application crediting the University of California at Berkeley. (Note: this is not the same as the Berkeley DB system.)
PostgreSQL is the ultimate DBMS for scientific databases, though, because it includes many mathematical functions, support for 3D geometrical calculations, custom datatypes, etc. Perhaps your company could provide some incentive to the PostgreSQL team to focus more attention on the Windows port.
Either way, I believe you would get much better performance out of either of these DBMSs than Access itself can provide.
Among commercial DBMSs you might consider Sybase, which is fairly similar to SQL Server in price. There is also a company called Alphora with a very interesting database product called Dataphor, which extends relational capabilities beyond the standard SQL limitations. Dataphor uses the .NET Framework, though, so your clients would have to download the .NET runtime.
August 13th, 2002, 02:50 PM
Also look at Firebird (the open source fork of Borland's InterBase 6.0) or SAP DB from SAP AG (also open source).
Anyway, I agree that MySQL would probably be the easiest to implement; note, though, that it does not support subselects (which Access does).
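The missing subselects are usually easy to work around by rewriting them as joins, which MySQL of that era does accept. A sketch using SQLite from Python (the `samples`/`batches` schema is made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE batches (id INTEGER, approved INTEGER)")
con.execute("CREATE TABLE samples (id INTEGER, batch_id INTEGER)")
con.execute("INSERT INTO batches VALUES (1, 1), (2, 0)")
con.execute("INSERT INTO samples VALUES (10, 1), (11, 1), (12, 2)")

# Subselect version (fine in Access, rejected by old MySQL):
#   SELECT id FROM samples
#   WHERE batch_id IN (SELECT id FROM batches WHERE approved = 1)
# Equivalent join, which old MySQL accepts:
rows = con.execute(
    "SELECT s.id FROM samples s "
    "JOIN batches b ON b.id = s.batch_id "
    "WHERE b.approved = 1 ORDER BY s.id"
).fetchall()
print(rows)  # [(10,), (11,)]
```

Both forms return the same rows here; only `IN`/`NOT IN`-style filters need this kind of rewrite, and `NOT IN` becomes a LEFT JOIN with an `IS NULL` test.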