| Subject | Re: [ib-support] Re: Maximum Capacity |
|---|---|
| Author | Paul Schmidt |
| Post date | 2003-02-27T14:28:40Z |
On February 26, 2003 04:04 pm, Frank Emser wrote:
> --- In ib-support@yahoogroups.com, Paul Schmidt <pschmidt@i...> wrote:
> > Mathematical limits, are limits like how many rows can a table have, how
> > many pages, etc. These are largely theoretical in nature, but are unlikely
> > to be tested because the other limits are much lower.
>
> Agreed. But the other limits change much faster.
> Have you ever experienced one of those famous BIOS limits for 512 MB,
> 2 GB, 32 GB, or 128 GB disks?
>
> > 32TB seems to be reasonable
> > these days.
>
> These days, indeed, only a few organizations can afford such amounts of
> disk space. But considering how fast disk space has grown and how cheap it
> has become over the last few years, I am curious how long "these days"
> will last.
> In another post I already mentioned that my first hard disk had only
> 1/1000th, no, correction: 1/10000th of the capacity of today's cheap and
> dirty hard disks. And it was _much_ more expensive!
True, but the point is that currently it's very difficult to test the
theoretical limits, because of other limitations that are present today. My
current HD is 20GB; my first one, about 15 years ago, was 20MB. In 15 years
we will probably be moaning that our 20TB drive is too small. However, by
then we will probably be looking at 128-bit file systems, and there will be
a 128-bit I/O version of Firebird, so the theoretical limits will be much
larger.
> > Then there are practical limits, like how long does it take for Gbak to
> > backup a certain sized database, for example if it takes 8 hours per TB,
> > then a 32TB backup would be almost 11 days (ouch), if it's a really
> > important database then 11 days to backup would not be reasonable, and
> > the down-time for a restore would not be reasonable either. In this case
> > 1-3 TB would be a more reasonable limit.
>
> If you restrict yourself to gbak, you are perfectly right.
> But I would like to stress the fact that gbak really isn't the last word!
>
> Indeed, with real-world Firebird databases of around 980 GB already out
> there, probably "someone" (who is someone?) should simply start to think
> about more clever methods of backing up huge databases:
> 1.) Obvious solution: incremental backups come to mind. Why should I have
> to back up a 980 GB database every time if, for example, only about 10%
> of it (or even less) has changed?
> 2.) The backup process could probably also record the changes made to the
> database while it was being backed up. This would mean, for example, that
> after completion of an 8-hour backup run you have a backup of the current
> state of the database, not one that is already 8 hours old.
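The incremental-backup idea in point 1 could look roughly like the sketch
below. Nothing like this exists in gbak; the CHANGED_AT column, the table
list, and the use of a generic Python DB-API driver (e.g. kinterbasdb for
Firebird at the time) are all assumptions made for illustration.

```python
# Rough sketch only: dump rows changed since the last backup, assuming every
# table carries a hypothetical CHANGED_AT timestamp column.  `conn` is any
# Python DB-API 2.0 connection (e.g. from kinterbasdb for Firebird).
import csv
import datetime

def incremental_backup(conn, tables, since, prefix):
    """Write rows modified after `since` into one CSV file per table."""
    cur = conn.cursor()
    for table in tables:
        # '?' is the qmark placeholder style; adjust to the driver in use.
        cur.execute("SELECT * FROM %s WHERE changed_at > ?" % table, (since,))
        with open("%s_%s.csv" % (prefix, table), "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow([col[0] for col in cur.description])  # header row
            writer.writerows(cur.fetchall())
    # The caller stores the returned value and passes it as `since` next time.
    return datetime.datetime.now()
```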
True, but backup times are only a small part of the issue; it's really the
restore that matters most. Making users wait 11 days for a restore, when
downtime costs the company $100,000 per hour, is generally unacceptable. The
idea was to illustrate that practical limits can be far more restrictive
than the theoretical ones.
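A quick back-of-the-envelope check of those numbers (the 8 hours per TB and
$100,000 per hour figures are the illustrative assumptions from the posts
above, not measurements):

```python
# Back-of-the-envelope check of the figures discussed above.
hours_per_tb = 8                  # assumed gbak throughput
db_size_tb = 32                   # the 32TB database from the example
downtime_cost_per_hour = 100_000  # dollars per hour of downtime

restore_hours = hours_per_tb * db_size_tb               # 256 hours
restore_days = restore_hours / 24.0                     # ~10.7 days ("almost 11")
downtime_cost = restore_hours * downtime_cost_per_hour  # $25,600,000

print("%d hours, %.1f days, $%d" % (restore_hours, restore_days, downtime_cost))
```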
Of course a 32TB database used for day-to-day operations would probably be
too slow, so it makes more sense to use a small database for day-to-day
operations and then run a monthly batch update into the "big" database,
running your summary reports after that. That way you may not need to back
up the big database every day, because you're not updating it every day.
This would be much more practical.
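A minimal sketch of that monthly batch update, assuming a hypothetical SALES
table with a SALE_DATE column and two ordinary DB-API connections, one to
the operational database and one to the big one:

```python
# Rough illustration of the "small operational DB, monthly batch into the
# big DB" layout.  Table and column names are made up; both connections are
# plain Python DB-API 2.0 connections to the two databases.
def monthly_rollup(ops_conn, big_conn, year, month):
    """Copy one month of rows from the operational database into the big one."""
    src = ops_conn.cursor()
    dst = big_conn.cursor()
    src.execute(
        "SELECT * FROM sales "
        "WHERE EXTRACT(YEAR FROM sale_date) = ? "
        "AND EXTRACT(MONTH FROM sale_date) = ?",
        (year, month))
    rows = src.fetchall()
    placeholders = ", ".join("?" * len(src.description))
    dst.executemany("INSERT INTO sales VALUES (%s)" % placeholders, rows)
    big_conn.commit()
    # Summary reports run against big_conn after the load; the big database
    # then only needs a backup once per month, right after this step.
    return len(rows)
```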