Subject | commit transaction overhead
---|---
Author | Tomas
Post date | 2008-07-29T07:24:02Z
Hi,
I am seeing some behavior I don't understand. We have an object-oriented
environment that is mapped onto a Firebird database. When a larger document
is saved, an outer transaction is started; inside it there are lots of
select / insert commands, and then the transaction is committed.
While this transaction is in progress and a new object is created, the
object needs a new identifier. This is produced by our own generator (I
mean our own piece of code, not a Firebird generator object). The very
first time it is used, the generator reads the last used value from a
special table, increments it by 36, and writes the new value back to the
table. When all 36 numbers have been handed out, it repeats the steps:
increase the counter by 36 and update the row in the special table. This
is done in a separate external transaction on a separate database
connection - so: start transaction, update one row in our counter table,
and commit.
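The block-allocation scheme described above can be sketched roughly like
this (a minimal Python illustration; the class and attribute names are my
own, and the in-memory dict stands in for the single-row counter table
that the real code updates in its own short transaction on a dedicated
connection):

```python
class BlockIdGenerator:
    """Hands out IDs in blocks of 36, persisting only the block boundary.

    `storage` stands in for the special counter table; in the real code
    each _reserve_block() call is a separate start-transaction /
    update-one-row / commit on a dedicated connection.
    """
    BLOCK_SIZE = 36

    def __init__(self, storage):
        self.storage = storage
        self.next_id = None    # next ID to hand out in the current block
        self.block_end = None  # last ID belonging to the current block

    def _reserve_block(self):
        # Real code: START TRANSACTION; read last value; write back
        # last + 36; COMMIT -- all on the separate cached connection.
        last = self.storage.get("last_value", 0)
        self.storage["last_value"] = last + self.BLOCK_SIZE
        self.next_id = last + 1
        self.block_end = last + self.BLOCK_SIZE

    def new_id(self):
        # Only one database round-trip per 36 IDs.
        if self.next_id is None or self.next_id > self.block_end:
            self._reserve_block()
        value = self.next_id
        self.next_id += 1
        return value


storage = {}
gen = BlockIdGenerator(storage)
first = gen.new_id()                       # -> 1, reserves IDs 1..36
rest = [gen.new_id() for _ in range(36)]   # exhausts block, reserves 37..72
```

The point of the pattern is that the counter row is only touched once per
36 IDs, so the commit on that row should normally be cheap.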
When this counter transaction commits, it usually takes a surprisingly
long time: 3-4 seconds on my computer. In other cases, or when I wrote a
simulation that does nothing but generate numbers, it was always quick,
taking only milliseconds. The generator's special table is not involved
at all in the first transaction that saves the bigger document.
Can anyone give me a suggestion about transaction overhead? It seems to
me that when a transaction commits, it has to carry out some maintenance
work caused by other transactions.
One extra point: the separate database connection used for the counter
is cached. When it is about to be reused, there is an extra check at the
beginning: a StartTransaction immediately followed by a Commit.
I see the longer duration on this "empty" commit as well. When I
disabled this check, the long duration moved to the other commits.
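For clarity, the "empty commit" liveness check amounts to something like
the sketch below (my own illustration; the `begin()`/`commit()` method
names are assumptions about whatever connection object the real code
uses, not a specific Firebird driver API):

```python
def connection_is_alive(con):
    """Probe a cached connection with an 'empty' transaction.

    Mirrors the StartTransaction-immediately-followed-by-Commit check
    described above: if either call fails, the cached connection is
    considered dead and should be reopened.
    """
    try:
        con.begin()   # StartTransaction
        con.commit()  # Commit with no work done in between
        return True
    except Exception:
        return False
```

Since no work happens between begin() and commit(), one would expect
this probe to be nearly free, which is why the 3-4 second "empty" commit
is so puzzling.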
Thanks, Tomas