Subject: Re: Slow inserts after 70 million records
Author: derryck.welas
Maybe confusing, but when I say "maintained in RAM", I mean the Linux OS file cache (which lives in RAM).

--- In firebird-support@yahoogroups.com, "derryck.welas" <derryckw@...> wrote:
>
> Hi All,
>
> I have a log table that records all inserts/updates/deletes
> from several tables, using triggers (see the sketch after the column list below).
>
> It does so by registering the following columns:
> SEQ_NR Integer,
> TABLE_NAME Varchar(100),
> TRANTYPE Varchar(1),
> UPDDATE Timestamp,
> OBJECT_SEQ_NR Varchar(100),
> REPL_STATUS Varchar(1),
> ERROR_CODE Varchar(100),
> ERROR_MSG Varchar(255)
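>
> For reference, a minimal sketch of how the table and one of the triggers could look in Firebird 2.5 DDL/PSQL. The names CHANGE_LOG, GEN_LOG_SEQ, CUSTOMER, its PK column ID and the status value 'P' are illustrative assumptions, not my real names:
>
>   CREATE GENERATOR GEN_LOG_SEQ;
>
>   CREATE TABLE CHANGE_LOG (
>     SEQ_NR        INTEGER NOT NULL,
>     TABLE_NAME    VARCHAR(100),
>     TRANTYPE      VARCHAR(1),
>     UPDDATE       TIMESTAMP,
>     OBJECT_SEQ_NR VARCHAR(100),
>     REPL_STATUS   VARCHAR(1),
>     ERROR_CODE    VARCHAR(100),
>     ERROR_MSG     VARCHAR(255)
>   );
>
>   SET TERM ^ ;
>   /* one multi-action trigger per source table */
>   CREATE TRIGGER CUSTOMER_LOG FOR CUSTOMER
>   AFTER INSERT OR UPDATE OR DELETE
>   AS
>     DECLARE VARIABLE TT VARCHAR(1);
>   BEGIN
>     IF (INSERTING) THEN TT = 'I';
>     ELSE IF (UPDATING) THEN TT = 'U';
>     ELSE TT = 'D';
>
>     INSERT INTO CHANGE_LOG (SEQ_NR, TABLE_NAME, TRANTYPE, UPDDATE,
>                             OBJECT_SEQ_NR, REPL_STATUS)
>     VALUES (GEN_ID(GEN_LOG_SEQ, 1), 'CUSTOMER', :TT, CURRENT_TIMESTAMP,
>             /* NEW is NULL on delete, OLD is NULL on insert */
>             CAST(COALESCE(NEW.ID, OLD.ID) AS VARCHAR(100)), 'P');
>   END^
>   SET TERM ; ^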
>
> Normally this works fine, but at around 70 million records the inserts suddenly started taking much longer (from ~20 ms to 900 ms, maybe higher).
>
> My theory: the main reason this got slow is the total index size of this table, which must be maintained in RAM.
>
> There are indexes on SEQ_NR, TABLE_NAME and OBJECT_SEQ_NR; that gives at least 260 index bytes per record (according to Ivan Prenosil's index size calculation).
>
> This totals up to (260 * 70,000,000) / 1024 / 1024 ≈ 17,357 MB of RAM required (could be less, because TABLE_NAME is not unique and duplicate keys compress well in Firebird indexes).
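>
> Rather than estimating, the real on-disk index size can be read from gstat's index statistics. A sketch, assuming a database path of /data/mydb.fdb (the output below is abbreviated and the figures are invented for illustration):
>
>   gstat -i /data/mydb.fdb
>
>   Index IDX_LOG_OBJSEQ (2)
>       Depth: 3, leaf buckets: 150000, nodes: 70000000
>
> leaf buckets * page size approximates the index size on disk, and a Depth greater than 3 is the usual sign that a larger database page size would help.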
>
> Question: since the machine has only 16000 MB of RAM, could this be the bottleneck I am hitting?
> (After a backup/restore the problem seems to disappear, but I fear it will return after the system has been in production for a while; see the note below.)
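>
> A backup/restore rebuilds every index from scratch, which is presumably why it helps. The same rebuild can be done in place for a user-created index with ALTER INDEX; the index name here is an assumption:
>
>   ALTER INDEX IDX_LOG_OBJSEQ INACTIVE;
>   ALTER INDEX IDX_LOG_OBJSEQ ACTIVE;
>
> Note that the table should be quiet while this runs, and it does not work on indexes that enforce PK/FK constraints.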
>
> (This is FB 2.5 Classic running on 64-bit Linux.)
>
>
> Regards,
> Derryck
>