Subject: Re: GBak TPB
Author: m2data
Post date: 2008-04-24T12:26:54Z
--- In firebird-support@yahoogroups.com, Helen Borrie <helebor@...>
wrote:
> At 05:59 AM 24/04/2008, you wrote:
> >Hi
> >
> >What TPB (transaction parameter buffer) is used by GBak?
>
> Isolation concurrency (snapshot), mode read-only, lock resolution wait....
Thanks
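For reference, that answer maps directly onto the standard transaction parameter buffer constants from Firebird's ibase.h. A minimal sketch (the constants are the documented ibase.h values; expressing the buffer as Python bytes is purely for illustration):

```python
# Standard TPB constants, values as defined in Firebird's ibase.h
isc_tpb_version3    = 3
isc_tpb_concurrency = 2   # snapshot isolation
isc_tpb_wait        = 6   # wait on lock conflicts
isc_tpb_read        = 8   # read-only transaction

# The parameter buffer corresponding to gbak's backup transaction:
# concurrency (snapshot), read-only, wait.
gbak_tpb = bytes([isc_tpb_version3,
                  isc_tpb_read,
                  isc_tpb_concurrency,
                  isc_tpb_wait])
```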
> >I need to make my own small version of gbak to export all tables
> >except for one, which gets exported in the form of one record = one file
> >(images in a blob).
>
> Look at DBak - http://www.telesiscomputing.com You can filter out
> any object that you don't want to back up.
I know of DBak, but its last update was back in 2005 and it has not
been tested against FB2.x.
>
> >I need this for incremental remote backup. The images are never
> >modified, so it is not good to transfer them every night.
>
> Gbak doesn't (and can't) do incremental backup.
>
> >One way to solve it is to store the images outside the database, but
> >since we don't allow shared folders, that is not an option.
>
> Another way is to store the images in a separate database. This
> presupposes that you are using a data access interface that supports
> two-phase transactions, of course. The down-side is that your
> application code is totally responsible for maintaining the
> referential integrity between the records in the master database and
> those in the image database. It is doable, however.
Storing the images in a separate database won't solve my problem. The
images' total size is in the range of 4GB-25GB. I cannot transfer that
much data every night.
My plan is:
1) Use gbak -M to get the metadata into a file.
2) Export every table to a separate file.
3) Export the blob data from the table containing the images to
separate image files, but ONLY if the image file doesn't already
exist. Meaning that with 8000 records in the image table I get 8000
files.
4) Transfer all changed files (since the last transfer) to the remote
server. That way I only transfer images added since the last remote
backup.
I would love to have a gbak parameter allowing me to skip some
tables. I could use that instead of 1)+2).
Brian