Subject: RE: [firebird-support] Advice on storing large numbers of documents in a Firebird database
>> Steve: The only issue is that once the DB gets to a certain size it can be a pain to do a full backup/restore

> Steve, thank you for your response. Please can you let me know a bit more about the backup problems
> you are having and any possible strategies you have to mitigate them.

> Marcus: not Steve, but a little bit of knowledge I'd like to share:
> - when storing documents inside the database as blobs, the only
> real problem is the time a backup or restore of the database takes.
> And the performance depends on several variables, such as the type of
> hard disk, RAID (and which type of RAID), whether the database and backup are on
> separate disks or even on separate machines, the network, ...
> As you can see, a lot of parameters have an influence on that 'problem'.

Thank you for this. As my requirement is to have a store of documents relating to different projects, probably the best thing is to have a separate data file (.fdb) for each project. That will keep the individual files much smaller, and it also means I can take frequent backups of 'active' projects and less frequent backups of archived projects.
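
Just to make that concrete, what I have in mind is a simple nightly gbak run per active project and a weekly run for the archived ones, along these lines (the paths and the SYSDBA password are only placeholders):

    # nightly, for each active project database
    gbak -b -v -user SYSDBA -password masterkey /data/projects/alpha.fdb /backup/alpha.fbk

with the archived project databases going through the same command once a week instead.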

> One possible strategy would be to store the metadata in the database and only store links
> to the documents.

I would like to avoid this if possible, because the idea is precisely to use the Firebird database as a container for the documents themselves.

>> For example, I thought it was possible to shut down the server and use operating system tools (e.g. scp?) to copy the file?

> That would be a safe copy, but you'll never be sure that the database
> structure is fine. A possibility would be to copy the database to
> another server and perform a backup/restore cycle there.

As mentioned above, I think I will get around the problem by splitting the databases into smaller chunks.
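
(For completeness, if I did go down the copy-and-check route, I assume the sequence would be roughly the following; the host name, paths and password are only placeholders:

    # on the production box: shut the database down cleanly, copy it, bring it back online
    gfix -shut -force 0 -user SYSDBA -password masterkey /data/projects/alpha.fdb
    scp /data/projects/alpha.fdb backuphost:/data/copies/alpha.fdb
    gfix -online -user SYSDBA -password masterkey /data/projects/alpha.fdb

    # on the backup box: prove the copy backs up and restores cleanly
    gbak -b -v -user SYSDBA -password masterkey /data/copies/alpha.fdb /data/copies/alpha.fbk
    gbak -c -v -user SYSDBA -password masterkey /data/copies/alpha.fbk /data/copies/alpha_restored.fdb

Please correct me if that is not what you meant.)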

However, out of interest: would it be possible to run something like a gfix validation, and possibly a sweep, after copying to check the integrity of the copied file, or is that just as slow?
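
Something along these lines is what I had in mind (again, the path and password are just placeholders):

    # full validation of the copied database, then an optional sweep if it comes back clean
    gfix -v -full -user SYSDBA -password masterkey /data/copies/alpha.fdb
    gfix -sweep -user SYSDBA -password masterkey /data/copies/alpha.fdb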


Many thanks...Mick