Subject: Re: [IBO] Re: Efficient Record Count
Author dkLists
Marco -

Thanks for the reply. I have considered a similar approach, and I think it
would be simple enough to store the count in a separate table and inc/dec
the count via stored procedure for each insert/delete.
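
For what it's worth, the inc/dec could live in AFTER INSERT / AFTER DELETE triggers rather than an explicitly called stored procedure. A minimal sketch, assuming a counter table and main table I've made up (RECCOUNT and MYTABLE are placeholder names):

```sql
/* Hypothetical counter table: one row per tracked table. */
CREATE TABLE RECCOUNT (TABLENAME VARCHAR(31), CNT INTEGER);

/* Bump the counter on every insert into the (hypothetical) MYTABLE. */
CREATE TRIGGER MYTABLE_AI FOR MYTABLE
AFTER INSERT AS
BEGIN
  UPDATE RECCOUNT SET CNT = CNT + 1 WHERE TABLENAME = 'MYTABLE';
END;

/* Decrement on every delete. */
CREATE TRIGGER MYTABLE_AD FOR MYTABLE
AFTER DELETE AS
BEGIN
  UPDATE RECCOUNT SET CNT = CNT - 1 WHERE TABLENAME = 'MYTABLE';
END;
```

One caveat: every inserting transaction updates the same RECCOUNT row, so concurrent batches can block each other on that row, which feeds directly into the slowdown concern below.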

A couple of issues. I can add records in batches of anywhere from 1 to
100,000, so if a stored procedure were incrementing/decrementing a counter on
each insert/delete, I would think it could slow the process down
significantly. Secondly, I'm not certain when would be the best time to do a
full SELECT COUNT(*) FROM TABLE INTO :CNT (or into a field object in another
table) to ensure that the count is accurate and up to date. I would think the
best time would be at the end of a batch, but I wouldn't want the client
session to have to wait on the stored proc to finish.

It makes sense that if I could monitor each session's insert/delete batch for
the beginning and end of the process, I could count the inserted/deleted rows
as they go, i.e. inc/dec a counter holding a running total, which might not
slow the batch job down in any discernible way. But I'm not certain how I
would implement such a 'counter' mechanism on the server.

I'm no expert at IB, but I'm guessing that IB treats each insert/delete as a
separate operation, making it impossible for the server to know when a batch
insert starts or ends.

I've created a stored procedure in IB 6 with an output parameter, :CNT. When
I use an IB_StoredProc and execute the proc, I have to read the result from a
field object rather than from a param, as the output param isn't visible to
the IB_StoredProc. This seems to be faster than doing a FetchAll, but it can
still take more than 10 seconds.
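
For reference, the kind of procedure I mean is roughly the following (MYTABLE is a placeholder name):

```sql
/* Count procedure with an output parameter. The SUSPEND makes it
   selectable, so the result comes back as a field rather than only
   as an output param. */
CREATE PROCEDURE GET_REC_COUNT
RETURNS (CNT INTEGER)
AS
BEGIN
  SELECT COUNT(*) FROM MYTABLE INTO :CNT;
  SUSPEND;
END;
```

Because of the SUSPEND, it can be read with SELECT CNT FROM GET_REC_COUNT, which sidesteps the output-param visibility issue since the value arrives as an ordinary field.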

If there were a way for IB to monitor the beginning and end of each batch
and act independently of the client session, I think I could figure this out.

Does this approach sound doable/familiar to anyone? Having IB 'trigger' a
stored proc at the beginning or end of a batch, start and maintain an
insert/delete row count during the batch, and independently update a field
in a separate table at the end of the batch?


David Keith
> Your topic is "efficient", so I'm thinking of a quick and dirty solution.
> I think that you have to check the table size in a fast way, but you can
> spend much more time if you have to "slim" it.
> If your main table uses a generator, you can perform the following:
> The very first time, you count the records (SELECT COUNT(*)), even if it
> takes a long time.
> Then you store in a table the number of records plus the actual generator
> value.
> Every time you want, you can read the actual generator number and compare
> it to the stored values (addedrec = newgen - (oldgen - oldcount)).
> Of course, this does not take care of the rows deleted by users, but it
> should be a minor issue.
> If the difference is beyond the value you decide, then:
> - delete old records
> - recount the table records
> - update the table with the new values (record count and generator).
> You could also perform a gbak backup and restore... ;)
> This way is very fast for the check :)
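
The generator-based check quoted above could be sketched like this, assuming a generator GEN_MYTABLE on the main table and a bookkeeping table RECCOUNT with CNT and GENVAL columns (all names are made up for illustration):

```sql
/* Store the baseline once, right after a full count. GEN_ID(gen, 0)
   reads the current generator value without incrementing it. */
UPDATE RECCOUNT
  SET CNT    = (SELECT COUNT(*) FROM MYTABLE),
      GENVAL = GEN_ID(GEN_MYTABLE, 0)
  WHERE TABLENAME = 'MYTABLE';

/* Fast estimate later: addedrec = newgen - (oldgen - oldcount),
   so the estimated count is newgen minus the stored offset. */
SELECT GEN_ID(GEN_MYTABLE, 0) - (GENVAL - CNT) AS EST_COUNT
  FROM RECCOUNT
  WHERE TABLENAME = 'MYTABLE';
```

As Marco notes, this overestimates once rows are deleted (the generator never goes down), so the stored baseline needs to be refreshed with a real COUNT(*) when the drift gets too large.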