Subject | Re: [firebird-support] Strange behavior on very large table
---|---
Author | Helen Borrie |
Post date | 2019-02-22T03:59:49Z |
Myles Wakeham wrote:
> The issue is with one very large table that contains about 900,000
> rows. This table is used as a temporary storage of data that is
> loaded every 24 hours from a CSV file, via an external program. The
> loading takes about an hour to run, but works reliably. We are not
> seeing this process changing in terms of time when it is run.
> What we are seeing is that for a period of about 3-4 hours after
> the morning data load is completed (about 4AM our time), any attempt
> to use that table seems to be triggering some very long loading or
> reindexing process.

Given that this table is "temporary" storage, one supposes that you
are deleting rows from it regularly. Do you happen to be deleting
900,000 rows each day before you load up the latest batch of 900,000?
With Classic, you have cooperative garbage collection, which means the
first transaction that selects on that table following the completion
of the bulk delete will get hit with GC of all those old back
versions.
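
In rough outline, the daily cycle presumably looks something like this
(the table name and the exact statements are placeholders, not taken
from your post):

   -- hypothetical daily refresh; IMPORT_STAGING is a made-up name
   DELETE FROM IMPORT_STAGING;   -- leaves ~900,000 back versions
   COMMIT;

   -- ... the external program then bulk-inserts the new CSV batch ...

   -- Under Classic, the first statement that reads the table afterwards,
   -- e.g.
   SELECT COUNT(*) FROM IMPORT_STAGING;
   -- also does the cooperative garbage collection of all those back
   -- versions, which is where a multi-hour stall can come from.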
> This appears on doing certain queries that involve indexes, or to
> re-create an index.

It would be hard to say whether the dog is wagging its tail or the
tail is wagging the dog. You don't exactly say how this table ebbs
and flows but bulk inserts of course will cause the indexes to be
updated, along with clearing out the junk in the indexes that was
created by deletes. Indexes are not recreated; nor are they created
until the engine is instructed to do so. Indexes are *rebuilt* when a
database is restored from backup and also by
ALTER INDEX <name> INACTIVE
followed by
ALTER INDEX <name> ACTIVE
which you are probably doing regularly if it's true that this table is
constantly being subjected to bulk deletes and inserts.
The engine doesn't otherwise mess with indexes.
If you're not doing any particular housework on it (restoring from
backup and/or resetting the indexes periodically), then it would be
normal to expect degrading performance until the next time that
housekeeping is done.
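
If you do want to reset the indexes by hand, something along these
lines would do it (the table and index names here are placeholders;
indexes that enforce constraints can't be switched inactive, so leave
those alone):

   -- find the user indexes defined on the table
   SELECT RDB$INDEX_NAME
     FROM RDB$INDICES
    WHERE RDB$RELATION_NAME = 'IMPORT_STAGING'
      AND COALESCE(RDB$SYSTEM_FLAG, 0) = 0;

   -- then, for each index returned (IDX_STAGING_DATE is hypothetical):
   ALTER INDEX IDX_STAGING_DATE INACTIVE;
   ALTER INDEX IDX_STAGING_DATE ACTIVE;
   COMMIT;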
> It seems to show some sort of caching between RAM and disk going
> on. I have set the sort size in firebird.conf from the default of
> 500mb to about 5GB to handle this, but that does not seem to have
> made a difference.

On Classic, there's no point in this. If anything, it will make it
worse if you have more than a handful of concurrent users. The sort
files on a big table are going to go to disk regardless, one way or
another. The engine can't split a sort file so that part of it is in
RAM and part in TempDirectories. The operating system might decide to
page out some RAM to disk - I seem to recall that's possible on some
systems with older Fb versions but I can't swear to it.
> All other tables are working just fine. It is only this one table
> and this problem only has appeared in the past 3 weeks or so.
> Prior to that, it ran without issue. This suggests to me that
> it has something to do with us reaching some critical volume (as the
> row count has increased at about 20% per year and continues to).

There's nothing like that and 900K rows is not an excessive size.
Rebuilding the indexes on that table would be an easy thing to try
during some time when the table isn't in use. It would be worth
considering how long it has been since the last backup-and-restore
cycle, too.
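
One quick check, assuming you are on Firebird 2.1 or later where the
MON$ tables exist: MON$CREATION_DATE shows when the database file was
created - in practice, roughly when it was last restored - and a wide
gap between the oldest and next transaction is a hint that garbage is
piling up:

   SELECT MON$CREATION_DATE,
          MON$OLDEST_TRANSACTION,
          MON$NEXT_TRANSACTION
     FROM MON$DATABASE;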
Cheers,
Helen