Subject: Re: [Firebird-devel] New backup technology
Author: Ann Harrison
Post date: 2003-07-09T15:51:26Z
Hi Nickolay!
I'm copying this to Architecture because Jim doesn't follow this
list and I'd be interested to hear what he has to say - well actually,
he's about six feet from me so I could just ask, but then the rest of
you wouldn't have the benefit of his advice.
Nickolay originally suggested:
>>>The idea of new backup engine is that main file is locked for changes
>>>during backup process and tools may access the file at hardware read
>>>speed. Changes are placed in separate difference file and then merged
>>>with main file after backup.

I replied:

>>Have you looked at the code that does the backup to start a shadow set?
>>... The disadvantage is that it requires that the database -
>>or something that understands how to integrate changes - do the actual copy.
>>
>>
>
>And this is the real problem. I can explain.
>1) Oracle and several other industrial databases allow locking of main
>database files to allow OS-level backup tools to work on them.
>People are familiar with this method.
>

OK, I suppose. Taking system management advice from Oracle has never
seemed like such a great idea, but I shouldn't ignore the possibility
that some of their ideas aren't so bad. Of course, you could backup the
database dump, but that supposes that you've got enough disk for at
least double your database size. Dumping directly to some other medium
does have a space advantage.
>2) I've seen several examples when backup device lacked Oracle Media
>Manager support. Those examples include SAN devices that have
>hardware-level incremental backup capabilities and several very modern
>robotic libraries. "ALTER TABLESPACE BEGIN/END BACKUP" Oracle SQL statement
>was used to lock files/devices for the actual backup. And we are unlikely
>to get Oracle-level of support from backup solution vendors. This is why
>OS-level tools working with database files directly is going to be our main
>way to work with existing backup solutions.

OK.

>I can add some details about how my backup solution works:
>1. Database is online at any point of backup process

Good.

>2. Each page has SCN (System Change Number) that is used to do
>incremental backups and during merge of changes from difference file
>(writing to main database file is allowed during merge)

Each page already has a generation number on it, though it doesn't have
its page number.
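
To be concrete about what that counter buys you, here is a tiny C++
sketch of the idea - purely illustrative, with made-up names rather than
Firebird's actual page header layout: each page carries a number that is
bumped on every write, and an incremental backup only copies pages whose
number is newer than the one recorded by the base backup.

    #include <cstdint>

    struct PageHeader {
        uint32_t page_type;
        uint32_t generation;   // bumped on every write - the "SCN"
    };

    // A page belongs in an incremental backup if it changed after the
    // counter recorded by the previous (base) backup.
    bool needs_backup(const PageHeader& page, uint32_t base_scn) {
        return page.generation > base_scn;
    }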

>3. Difference file consists of pointer pages located at regular
>intervals and redirected pages in (almost) random order. Page allocation
>tables are cached in memory for each CS process (this is how it works
>now, v.2.0 is planned to have new shared memory infrastructure so this
>may be improved in non-clustered case).

Let me see if I understand how this works. You lock the database and
start writing changes to a difference file - that file has some sort of
pointer mechanism so if you're looking for page 15, you go to pointer 15
and if the page is in the difference file, you'll find its offset. When
the difference file is in use, every transaction must check the
differences first, then the regular database. Right? When the copy is
done, you unlock the database and users make their changes directly to
it again, though they need to continue to check the difference file.
You start applying the pages from the difference file to the database.
Before applying a page from the file to the database, you first check
the generation number. If the database page has a higher generation
number, then you skip writing from the difference file and drop that
changed page on the floor. When you've emptied the difference file
everything goes back to normal.
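
If it helps, here is how I would sketch that read path and the merge in
C++ - again purely illustrative, with invented names and an in-memory
map standing in for the pointer pages, not the actual NBACKUP code:

    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    struct Page {
        uint32_t generation;               // per-page change counter
        std::vector<uint8_t> data;
    };

    struct DifferenceFile {
        // the pointer pages boil down to: page number -> redirected page
        std::unordered_map<uint32_t, Page> redirected;

        const Page* lookup(uint32_t page_no) const {
            auto it = redirected.find(page_no);
            return it == redirected.end() ? nullptr : &it->second;
        }
    };

    struct Database {
        std::unordered_map<uint32_t, Page> pages;   // stand-in for the main file
    };

    // While the difference file is active, readers check it first and
    // fall back to the main database file.
    const Page& read_page(const Database& db, const DifferenceFile& diff,
                          uint32_t page_no) {
        if (const Page* p = diff.lookup(page_no))
            return *p;
        return db.pages.at(page_no);
    }

    // After the copy finishes, apply each redirected page unless the main
    // file already holds a newer generation of it.
    void merge(Database& db, DifferenceFile& diff) {
        for (auto& [page_no, diff_page] : diff.redirected) {
            Page& db_page = db.pages[page_no];
            if (db_page.generation > diff_page.generation)
                continue;                  // drop the stale page on the floor
            db_page = diff_page;
        }
        diff.redirected.clear();           // back to normal operation
    }

Is that the gist of it, or do the pointer pages work differently?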

>4. Because of ill-ordered difference file structure performance
>degradation for its maintenance will be significant if its core doesn't
>fit into hardware-level, OS-level or Firebird-level cache (latter should
>not be used in FB 1.5 and below because it is very much slower than the
>former ones). But if you analyse statistics for large database usage
>patterns you'll notice that the amount of changes done during the backup period usually
>easily fits even into storage controller hardware cache (that is usually much
>smaller than OS-level cache). Otherwise it means your system is misconfigured.

OK.

>BTW, I can implement both approaches - locking main file for changes
>to exploit OS-level tools and logging old pages into difference file
>when NBACKUP is used. This doesn't change the implementation significantly.
>NBACKUP needs a kind of Media Manager anyway.

Apparently I was either unclear or completely dunderheaded. For the
shadow type backup, you log new changes to pages that have already been
copied, not the old state - that's already there and wrong. You end up
with a copy that's current as of the time it finished, not when it started.
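
In other words, something like this rough C++ sketch (hypothetical
names, not the actual shadow code): pages go out in order, and any write
that lands on a page that has already been copied is applied to the copy
as well, so the finished copy reflects the moment the copy ended rather
than the moment it started.

    #include <cstdint>
    #include <vector>

    struct ShadowCopy {
        std::vector<std::vector<uint8_t>> pages;   // the growing copy
        uint32_t next_to_copy = 0;                 // pages below this are already out

        // Called by the copier as it walks the database in page order.
        void copy_page(const std::vector<uint8_t>& data) {
            pages.push_back(data);
            ++next_to_copy;
        }

        // Hooked into the normal write path: keep already-copied pages current.
        void on_page_write(uint32_t page_no, const std::vector<uint8_t>& new_data) {
            if (page_no < next_to_copy)
                pages[page_no] = new_data;         // the new state, not the old one
        }
    };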
Regards,
Ann