Subject: Re: [ib-support] DB Design Access - Firebird
Author: Paul Schmidt
Post date: 2003-04-30T14:27:58Z
On April 30, 2003 05:19 am, Erdmann, Ralf wrote:
> Hello all,
> I've got an older datalogging application which is based on MS Access
> and I'd like to port it to Firebird 1.02.
> Working on Windows NT 4.0 Sp6a with NTFS.
>
> The Access-DB is designed as follows:
>
> For each year there is a .mdb file with tables where logged data is
> stored (2002.mdb, 2003.mdb ...)
> and one frontend.mdb which has links to all tables in 2002.mdb,
> 2003.mdb ... actually up to 2009.mdb.
> Using separate .mdb files for each year I could be sure not to reach
> the file size maximum.
You have our condolences: MS Access is an 800kg gorilla that can only lift
about 5kg. Firebird, on the other hand, is a 5kg bird that can lift 800kg.
With Windows NT I think the file size limit is around 4GB; that is a
platform limitation rather than a Firebird one, so the question is how to
get past it.
There are three ways: first, use a platform that allows larger files;
second, use a segmented (multi-file) database; third, change the core
ideology.
Some Unix platforms (including Linux, if you use the right file system)
support 64-bit file I/O, which allows for larger files; you need to check
the docs for the file system or distribution you are using to see the
limit for each file system. Of course, you need a build of Firebird
compiled with 64-bit file I/O for this to work properly.
The second method uses a segmented file architecture: you have multiple
files within the same database. When you get close to the limit, say
3.5GB, you back up the database and restore it to two files, one around
3GB and the second holding the rest. There is a limit on the number of
files, but I think it's something like 65536, so that should take a while
to hit. If you're collecting around 1GB of data per year, you could simply
create the database with five 2GB segments up front, which should cover
the entire size of your data.
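Something like this from isql would do it (a sketch only; the file names
and password are placeholders, and I'm assuming a 4096-byte page size, so
2GB works out to 524288 pages per segment):

    CREATE DATABASE 'C:\data\logger.gdb'
      USER 'SYSDBA' PASSWORD 'masterkey'
      PAGE_SIZE 4096
      LENGTH 524288 PAGES              /* segment 1: pages 1 to 524288 */
      FILE 'C:\data\logger.g01'
        STARTING AT 524289 LENGTH 524288 PAGES
      FILE 'C:\data\logger.g02'
        STARTING AT 1048577 LENGTH 524288 PAGES
      FILE 'C:\data\logger.g03'
        STARTING AT 1572865 LENGTH 524288 PAGES
      FILE 'C:\data\logger.g04'
        STARTING AT 2097153;           /* last segment grows as needed */

The backup-and-restore route produces the same layout after the fact: as
far as I recall, a gbak restore accepts several target file names, each
but the last followed by its size in pages.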
The third method: 2GB per year is a lot of data, so you need to look at
whether all of it really needs to be kept at full detail for the entire
period. Can the data be summarized at some point, with only the summaries
stored beyond the initial period? Often there are two sets of
requirements, what the user needs and what the user wants, and what they
want often far exceeds what they need. I have seen 1500-page detail
reports where the only thing ever used was the final number on the last
page.
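As a sketch of that idea, with hypothetical LOG_DETAIL and LOG_SUMMARY
tables (the table and column names are made up for illustration), you
could roll old detail up into one row per day and channel, then purge it:

    INSERT INTO LOG_SUMMARY (LOG_DATE, CHANNEL, READINGS, AVG_VALUE)
      SELECT LOG_DATE, CHANNEL, COUNT(*), AVG(VALUE)
        FROM LOG_DETAIL
        WHERE LOG_DATE < '2001-01-01'
        GROUP BY LOG_DATE, CHANNEL;

    DELETE FROM LOG_DETAIL
      WHERE LOG_DATE < '2001-01-01';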
One thing to note, though: different engines use different internal
structures and storage methodologies. There are hundreds of ways of
storing an index on a table, for example, and even more ways of storing
the data itself. What I am trying to say is that, given the same table
definition and the same data, two database engines can have radically
different storage sizes. It also depends on how much "air" the engine
puts into the data and indexes.
Paul S.