Subject | Re: [firebird-support] Issue with large table in FB 1.5 and 2.5
---|---
Author | Helen Borrie
Post date | 2019-05-29T20:15:02Z
> My code that loads the data commits the transactions after the
> DELETE statement, and then at each 5,000 rows loaded. I have
> checked this by watching the load and using IBExpert to count
> records, so I can confirm that the commits are happening.
> After the loading has completed, what then occurs is strange. Any
> attempt to interact with that table (and only that table) forces
> some form of sweep or some delay that shows up as a process on the
> server running at near 100% CPU, for about 4-5 hours. Then it calms
> down and access to the table is pretty much instantaneous. I
> thought that this might be a sweep that is occurring after the load,
> so I set the database to not auto sweep.

No, it is not sweeping, it is cooperative garbage collection, and
that's exactly how coop GC works. In the daily ebb and flow of your
production tables, it is barely noticeable but GC on those bulk
deletes will hit the next transaction that reads from your offending
table.
BTW, background GC (mentioned by Mark) is not available in Classic or
SuperClassic, so configuring for this won't do anything.
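If you want to confirm that it really is back versions piling up on that one table, gstat's record-statistics report is one way to look. A sketch only; the database path, credentials, and table name below are placeholders, not from this thread:

```shell
# Report per-table record and version statistics for the database.
# In the output, a large "total versions" / long "Average version length"
# on the offending table indicates uncollected old record versions.
gstat -r -t MYTABLE -user SYSDBA -password masterkey /data/mydb.fdb
```

Run it before and after the bulk delete and the difference in version counts shows what the next reading transaction will have to garbage-collect.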
> I have force dropped the table, and rebuilt it.

That is your solution - but *do not delete those records*.
> I have done a backup with sweep and a restore to refresh the database.

That's extreme unless you do a daily backup-and-restore anyway... but
if you are restoring the database anyway, you don't need to have the
backup doing GC - use the -[no_]g[arbage_collect] switch. On Linux
you would also need to make sure that subsequent connections are made
to the restored DB, not continuing to be done on the old one.
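As a sketch of that backup-without-GC cycle (paths and credentials here are placeholders, not from the thread):

```shell
# Backup WITHOUT garbage collection: -g inhibits GC during the backup pass,
# so gbak does not spend hours collecting the bulk-delete's back versions.
gbak -b -g -user SYSDBA -password masterkey /data/mydb.fdb /backups/mydb.fbk

# Restore to a NEW file; the restored database contains no old record versions.
gbak -c -user SYSDBA -password masterkey /backups/mydb.fbk /data/mydb_new.fdb
```

On Linux, the point about connections means updating the alias or path your applications attach to, so nothing keeps connecting to the old file.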
> Nothing seems to make any difference.

No, none of those things would make any difference as long as you are
deleting those records and thus creating a large number of old record
versions.
Incidentally, if this table is the only one you have that is building
up that level of old record versions, setting the sweep interval to 0
is the right thing to do, regardless.
Then, of course, you must ensure that backups (without the -g switch)
and/or sweeps are done often enough to keep the "gap" in a comfortable
state. Also, you don't need to do *both* a backup (sans -g) and a
sweep, since the backup achieves the same thing as sweep at the end of
it all. Of course, it is moot as to whether sweeping before the
backup, thus taking the GC out of gbak's hands, would gain you
anything in terms of the time this housekeeping would take.
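The housekeeping side can be sketched with gfix (again, the path and credentials are placeholders):

```shell
# Set the sweep interval to 0, i.e. disable automatic sweep, so the
# bulk-delete table cannot trigger it at an inconvenient moment.
gfix -housekeeping 0 -user SYSDBA -password masterkey /data/mydb.fdb

# Then run a manual sweep at a quiet time instead - or skip it in any
# cycle where a backup is done without the -g switch, since that backup
# garbage-collects as it reads.
gfix -sweep -user SYSDBA -password masterkey /data/mydb.fdb
```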
Helen