| Subject | AW: [firebird-support] NBackup Levels |
|---|---|
| Author | Steffen Heil (Mailinglisten) |
| Post date | 2014-05-07T06:42:16Z |
Hi
> > I could simply create backups with incrementing levels, move the
> > backup to the other server(s) and apply them there (that database is
> > offline).
> > However, I suspect there is a limit for the number of backup levels a
> > database can have.

> You don't need a lot of backup levels to do what you want. For example, do a level 0 backup once a month/year/whatever, then do a
> level 1 backup every weekend, a level 2 backup every day, and if required a level 3 backup every hour/whatever.

For another type of service I already have a backup script that creates 5 levels of backups:
Level 0 on the first Sunday of every quarter.
Level 1 on the first Sunday of every month.
Level 2 every Sunday.
Level 3 every day.
Level 4 every hour.
But for this project I want more than hourly consistency; I am targeting 5 minutes or even less. That could be done using:
Level 5 every 5 minutes.
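
For illustration, a minimal sketch of such a schedule as a shell wrapper run hourly from cron; the paths, file naming and the SYSDBA/masterkey credentials are placeholders I made up, only the `nbackup -B <level>` invocation itself is standard Firebird usage:

```sh
#!/bin/sh
# Pick the backup level from the current date; run hourly via cron.
# Hypothetical paths and credentials -- adjust for the real setup.
DB=/data/service.fdb
OUT=/backups/service
STAMP=$(date +%Y%m%d-%H%M)

if [ "$(date +%H)" != "00" ]; then
    LEVEL=4                        # every hour
elif [ "$(date +%u)" != "7" ]; then
    LEVEL=3                        # every day, at midnight
elif [ "$(date +%d)" -gt 7 ]; then
    LEVEL=2                        # every Sunday
else
    case "$(date +%m)" in          # first Sunday of the month:
        01|04|07|10) LEVEL=0 ;;    # quarter month -> full backup
        *)           LEVEL=1 ;;
    esac
fi

nbackup -U SYSDBA -P masterkey -B "$LEVEL" "$DB" "$OUT.L$LEVEL.$STAMP"
```

A 5-minute level 5 would add a finer-grained branch on $(date +%M) and a matching */5 cron entry.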
However, in this case there might be lots of days with nearly no difference, and then some days with gigabytes of changes.
Using the approach above would mean copying all these changes up to 23 times: a change written just after the daily level-3 backup is contained again in each of the up to 23 hourly level-4 backups that follow it that day.
I would really like to prevent that kind of extra traffic AND, more importantly, that delay in synchronization.
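
On the receiving side, the quoted plan of applying the chain to an offline standby would use nbackup's restore mode, which takes the backup files in ascending level order; the file names here are the hypothetical ones from the sketch above:

```sh
# Rebuild the standby database from the level-0 file plus the deltas,
# lowest level first (the target file must not already exist).
nbackup -R /data/standby.fdb \
    /backups/service.L0 /backups/service.L1 \
    /backups/service.L2 /backups/service.L3 /backups/service.L4
```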
Moreover, the server's hard drives may be rather slow, and the database may grow up to 200 GB.
(During operation there are relatively few reads and only some writes; the database is idle 99% of the time, so the slow I/O system is not a problem for normal operation.)
Regards,
Steffen