Subject Re: [firebird-support] Re: script for nbackup
Author Ann Harrison
David,

On Thu, Jun 2, 2011 at 3:04 AM, davidwilder_13 <davidwilder13@...> wrote:
> Hi Ann,
>
> 1. Create test.fdb, create table and insert 2 records
> 2. Lock database
> #./nbackup -U BASX -P **** -L ../data/test.fdb
> 3. Copy database to temp location
> #cp ../data/test.fdb ../data/temp/test.fdb
> 4. inserted a few more records into the original database which created a merge file
> 5. Set state of backup database to normal with nbackup
> #./nbackup -U BASX -P **** -F ../data/temp/test.fdb
> 6. Backup and restore
> #./gbak -b -t -v -user BASX -pas **** "../data/temp/test.fdb" "../data/temp/test.gbk"
> #./gbak -c -p 16384 -v -user BASX -pas **** ../data/temp/test.gbk ../data/temp/testnew.gdb
> 7. Then Shutdown service and swap the databases
> #/etc/init.d/firebird stop
> #mv ../data/temp/testnew.gdb ../data/test.fdb
> #/etc/init.d/firebird start
> 8. Then merge changes back in with nbackup
> #./nbackup -U BASX -P **** -N ../data/test.fdb
> After this we found the merge file was deleted and we had all 6 records in the new database.
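For what it's worth, the nbackup portion of those steps can be collapsed into one script. This is only a sketch: the /opt/firebird/bin path, the "secret" password, and the DRY_RUN guard are my placeholders, not anything from your setup, so adjust before using it.

```shell
#!/bin/sh
# Sketch of the nbackup lock / copy / unlock cycle from the steps above.
# DRY_RUN defaults to 1, so the script only prints what it would do
# until you explicitly set DRY_RUN=0.
set -e
DRY_RUN=${DRY_RUN:-1}
FB=/opt/firebird/bin             # assumed install path -- adjust
DB=../data/test.fdb
COPY=../data/temp/test.fdb

run() {
    if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi
}

run "$FB/nbackup" -U BASX -P secret -L "$DB"    # lock: writes go to the delta file
run cp "$DB" "$COPY"                            # copy the locked main file
run "$FB/nbackup" -U BASX -P secret -N "$DB"    # unlock: merge the delta back in
run "$FB/nbackup" -U BASX -P secret -F "$COPY"  # fixup: mark the copy as normal
```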


My guess is that because you started with a tiny new database, everything
you did went to the same pages on the gbak restore, so the nbackup
page-level replacement didn't break things. You might try the same tests
on a database that's been used for a couple of weeks; I suspect you'll be
less lucky.

> We were yet to run some more thorough tests but if nbackup is not intended for this purpose I guess we won't bother now.

Mixing logical and physical backup/recovery methods doesn't strike me as
a good idea.
>
> The reason for this requirement is that our customers have requested a change to our service agreement and wish to switch to 24/7 availability.
> This is fine for most databases with the exception of several 700GB+ systems. These databases process over 200,000 transactions a day and we have found that from time to time we get a performance decrease which we cannot fix with a sweep. This performance drop means our daily ETL process that usually imports records at 400-600 recs/sec drops to less than 100.

> Our current solution is to bring the system off-line on weekends and backup/restore which takes 30-40+hours to complete and fixes the performance problem every time.

OK. A gbak backup and restore fixes a number of things, but it's
certainly a blunt instrument, and inconsistent with 24/7 operation.
> Do you know of any methods caching changes for the purpose of an online (or few minutes outage max) backup restore?
>
> Or perhaps any ideas about how we find and fix the root cause of our performance decrease?

Well, there's always IBPhoenix, or you can start looking at the slow
databases with gstat and other tools to see how they differ from a newly
restored database. Lots of people have offered tuning advice on this list
over time. But the first step is to figure out what's wrong.
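As a starting point for that comparison, here's a minimal sketch of the gstat checks I'd run against both the slow database and a fresh restore. The /opt/firebird/bin path and database path are placeholder assumptions; the wrapper only invokes gstat if the binary is actually there, and all of these are read-only.

```shell
#!/bin/sh
# Hedged sketch: gstat diagnostics to compare a slow database
# against a freshly restored copy.  Paths are placeholders.
FB=${FB:-/opt/firebird/bin}
DB=${DB:-../data/test.fdb}

gstat_check() {
    # Run gstat only if the binary exists, so the sketch is safe anywhere.
    if [ -x "$FB/gstat" ]; then "$FB/gstat" "$@" "$DB"; fi
}

gstat_check -h   # header page: OIT/OAT/next-transaction gap, sweep interval
gstat_check -r   # per-table record and version counts: back-version buildup
gstat_check -i   # index depth and fill distribution
```

A large gap between the oldest interesting transaction and the next transaction, or heavy back-version counts in -r output, would point at garbage-collection trouble rather than anything a restore fixes by magic.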

Good luck,

Ann