Subject | Re: [firebird-support] Re: IS this the right thing to do (trigger)
---|---
Author | Alexandre Benson Smith
Post date | 2008-11-20T22:14:53Z
Adam wrote:
> --- In firebird-support@yahoogroups.com, Robert martin <rob@...> wrote:
>
>> Hi
>>
>> Overview
>> I have a large table (5 - 20 million records) that I regularly need to
>> do grouped counts on. This is slow. My planned solution is to have a
>> counting table updated by triggers. I would like to know what people
>> think of the following proposed solution. Will it work properly, what
>> sort of performance hit will be involved, is there a better way?
>>
>>
>> Detail
>> I have a large table (5 - 20 million records) with the following
>> structure...
>>
>> CREATE TABLE BranchSend
>> /* One record for each item to be sent to HO, deleted once confirmed */
>> (
>> SendRef BigInt NOT NULL,
>> RegionRef BigInt NOT NULL,
>> SendTable VarChar(25), /* Table Name */
>> PkRef BigInt,
>> PkRef2 BigInt,
>> PkRef3 BigInt,
>> TransferBatchRef BigInt,
>> TransferConfirmed D_Boolean,
>> CONSTRAINT BranchSend_PK PRIMARY KEY (SendRef)
>> )^
>>
>>
>> I am regularly running a query for each RegionRef that gives a count of
>> records for each SendTable for that region where TransferConfirmed = 'F',
>> i.e. Group by RegionRef, SendTable
>>
>> This query can take 25+ seconds. It is being run by a web service for
>> each client connection (region). Therefore it is having quite a big hit
>> on the performance of the server.
>>
>> What I am thinking of is a totals table with the following structure
>>
>> CREATE TABLE BranchSendCount
>> (
>> RegionRef BigInt NOT NULL,
>> SendTable VarChar(25) NOT NULL, /* Table Name */
>> AvailableRecs Bigint,
>>
>> CONSTRAINT BranchSendCount_CPK PRIMARY KEY (RegionRef, SendTable)
>> )^
>>
>> And the following triggers to maintain totals....
>>
>>
>> CREATE TRIGGER BranchSend_Insert FOR BranchSend
>> BEFORE INSERT POSITION 10
>> AS BEGIN
>> IF (NEW.TransferConfirmed = 'F') THEN
>> UPDATE BranchSendCount
>> SET AvailableRecs = AvailableRecs + 1
>> WHERE RegionRef = NEW.RegionRef
>> AND SendTable = NEW.SendTable;
>> END^
>>
>> CREATE TRIGGER BranchSend_Delete FOR BranchSend
>> AFTER DELETE POSITION 10
>> AS BEGIN
>> IF (OLD.TransferConfirmed = 'F') THEN
>> UPDATE BranchSendCount
>> SET AvailableRecs = AvailableRecs - 1
>> WHERE RegionRef = NEW.RegionRef
>> AND SendTable = NEW.SendTable;
>> END^
>>
>>
>> CREATE TRIGGER BranchSend_Edit FOR BranchSend
>> AFTER EDIT POSITION 10
>> AS BEGIN
>> IF (OLD.TransferConfirmed = 'F')
>> AND (New.TransferConfirmed = 'T') THEN
>>
>> UPDATE BranchSendCount
>> SET AvailableRecs = AvailableRecs - 1
>> WHERE RegionRef = NEW.RegionRef
>> AND SendTable = NEW.SendTable;
>>
>> IF (OLD.TransferConfirmed = 'T')
>> AND (New.TransferConfirmed = 'F') THEN
>>
>> UPDATE BranchSendCount
>> SET AvailableRecs = AvailableRecs - 1
>> WHERE RegionRef = NEW.RegionRef
>> AND SendTable = NEW.SendTable;
>>
>> END^
>>
>>
>>
>> What do you guys think? Is it a good / viable solution?
>>
>
> It depends. Is it a problem that only one transaction can
> insert/update/delete from BranchSend at any time? If more than one
> transaction tries to insert a record into BranchSend, there will be a
> lock conflict on BranchSendCount.
>
> (and shouldn't AFTER EDIT be AFTER UPDATE? + mistake Sean already
> pointed out)
>
> Adam
>
It could be avoided if the summary table has more than one record per
region/SendTable: instead of an update, the trigger inserts a new row
with value +1 when the record is marked 'T' and a row with -1 when it is
marked 'F'. The table is then summarized again daily.
Then, to get the current value, use a SUM(column) over all the rows.
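
Something like this, as a rough sketch (the BranchSendDelta table and the
trigger names below are invented, modeled on your BranchSend definitions;
the signs are chosen so that SUM(Delta) gives the count of unconfirmed 'F'
records you are after):

CREATE TABLE BranchSendDelta
(
  RegionRef BigInt NOT NULL,
  SendTable VarChar(25) NOT NULL,
  Delta     BigInt NOT NULL  /* always +1 or -1 */
)^

CREATE TRIGGER BranchSendDelta_AI FOR BranchSend
AFTER INSERT POSITION 10
AS BEGIN
  /* A new unconfirmed record: one more to count */
  IF (NEW.TransferConfirmed = 'F') THEN
    INSERT INTO BranchSendDelta (RegionRef, SendTable, Delta)
    VALUES (NEW.RegionRef, NEW.SendTable, 1);
END^

CREATE TRIGGER BranchSendDelta_AD FOR BranchSend
AFTER DELETE POSITION 10
AS BEGIN
  /* An unconfirmed record disappeared: one less to count */
  IF (OLD.TransferConfirmed = 'F') THEN
    INSERT INTO BranchSendDelta (RegionRef, SendTable, Delta)
    VALUES (OLD.RegionRef, OLD.SendTable, -1);
END^

CREATE TRIGGER BranchSendDelta_AU FOR BranchSend
AFTER UPDATE POSITION 10
AS BEGIN
  /* Record just confirmed: one less unconfirmed record */
  IF (OLD.TransferConfirmed = 'F' AND NEW.TransferConfirmed = 'T') THEN
    INSERT INTO BranchSendDelta (RegionRef, SendTable, Delta)
    VALUES (NEW.RegionRef, NEW.SendTable, -1);
  /* Record un-confirmed again: one more unconfirmed record */
  IF (OLD.TransferConfirmed = 'T' AND NEW.TransferConfirmed = 'F') THEN
    INSERT INTO BranchSendDelta (RegionRef, SendTable, Delta)
    VALUES (NEW.RegionRef, NEW.SendTable, 1);
END^

/* The current counts; the triggers only ever INSERT into BranchSendDelta,
   so there is nothing to get a lock conflict on */
SELECT RegionRef, SendTable, SUM(Delta) AS AvailableRecs
FROM BranchSendDelta
GROUP BY RegionRef, SendTable^
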
The same approach has been discussed here many times to improve a
SELECT COUNT(*) on a huge table.
It would be a lock-free solution.
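
The daily summarizing could be a simple maintenance job along these lines
(again just a sketch; it assumes Firebird 2.0 or later for EXECUTE BLOCK
and the hypothetical BranchSendDelta table above, and it should be run in
a SNAPSHOT transaction so that delta rows committed by other transactions
while it runs are neither counted nor deleted):

EXECUTE BLOCK
AS
DECLARE VARIABLE vRegionRef BigInt;
DECLARE VARIABLE vSendTable VarChar(25);
DECLARE VARIABLE vTotal BigInt;
BEGIN
  FOR SELECT RegionRef, SendTable, SUM(Delta)
        FROM BranchSendDelta
        GROUP BY RegionRef, SendTable
        INTO :vRegionRef, :vSendTable, :vTotal
  DO BEGIN
    /* Replace the pile of +1/-1 rows with one consolidated row per group */
    DELETE FROM BranchSendDelta
     WHERE RegionRef = :vRegionRef
       AND SendTable = :vSendTable;

    INSERT INTO BranchSendDelta (RegionRef, SendTable, Delta)
    VALUES (:vRegionRef, :vSendTable, :vTotal);
  END
END^
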
see you !
--
Alexandre Benson Smith
Development
THOR Software e Comercial Ltda
Santo Andre - Sao Paulo - Brazil
www.thorsoftware.com.br