| Subject | Re: [IBO] Transactions |
|---|---|
| Author | Helen Borrie |
| Post date | 2005-10-25T06:36:14Z |
At 02:45 PM 25/10/2005 +1300, you wrote:
>Hi
>
>We have a BDE app that has been converted to Firebird. We have used IBO
>components with autocommit. However, we are implementing proper
>transaction handling, as in the previous email thread (I'm back from
>holiday now).
>
>We have a large processing block that I have surrounded by one
>transaction, because if any part fails we want to roll back and retry.
>However, the process is now much slower! With autocommit and the default
>IBO transaction it takes about 0.4s to process. With an explicit
>StartTransaction / Commit it now takes 11s!!!
First of all, it is no longer doing what it did before. Previously, your
processing block was not atomic, since there is no backing out once you
have committed something. Under the new architecture, you are building a
back-out trail in memory (known as an Undo log).
Still, that won't account for a difference of 7s in processing time. I
would expect to find that your "block" is trudging its way through a
dataset, possibly forcing prepares where they're not needed, and destroying
SQL objects each time the process completes.
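To illustrate the pattern (a sketch only - the table, column and
parameter names are invented, and I'm assuming the IBO native TIB_DSQL
statement component with ParamByName access, not your actual code):
prepare the statement once before the loop, change only the parameter
values on each pass, and commit or roll back the whole block at the end.

```pascal
// Requires IB_Components (the IBO native components) in the uses clause.
// Sketch only: STOCK, QTY and ITEM_ID are invented names.
procedure ProcessBlock(Cn: TIB_Connection; Tr: TIB_Transaction;
  const IDs, Qtys: array of Integer);
var
  Stmt: TIB_DSQL;
  i: Integer;
begin
  Stmt := TIB_DSQL.Create(nil);
  try
    Stmt.IB_Connection := Cn;
    Stmt.IB_Transaction := Tr;
    Stmt.SQL.Text :=
      'UPDATE STOCK SET QTY = QTY - :QTY WHERE ITEM_ID = :ITEM_ID';
    Stmt.Prepare;                     // prepare ONCE, before the loop
    Tr.StartTransaction;
    try
      for i := Low(IDs) to High(IDs) do
      begin
        // only the parameter values change on each pass; the statement
        // stays prepared, so there is no re-prepare cost inside the loop
        Stmt.ParamByName('ITEM_ID').AsInteger := IDs[i];
        Stmt.ParamByName('QTY').AsInteger := Qtys[i];
        Stmt.Execute;
      end;
      Tr.Commit;                      // the whole block succeeds together...
    except
      Tr.Rollback;                    // ...or is backed out as one unit
      raise;
    end;
  finally
    Stmt.Free;                        // free the SQL object once, not on
  end;                                // every run-through of the process
end;
```

If instead the SQL text is rebuilt, the statement re-prepared, or the
component created and destroyed on every pass (or on every run of the
process), you pay that cost every single time.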
If you really are doing this process optimally, then on your figures I'd
expect the first run-through to be slowish (say 5 secs, though I consider
this at least 5X too long) and subsequent ones to be sub-second. The
desktop-to-client-server mindshift isn't just about controlling transactions.
You might well have a garbage collection (GC) thread competing with your
process. You wouldn't expect this on every run-through but could well
expect it on the first. That also depends on what the previous run of the
process did - if it performed a lot of updates and/or deletes, then one
instance of the process is going to cop the GC.
>I have run gStat which returns the following...
>
>Database header page information:
> Flags 0
> Checksum 12345
> Generation 31654
> Page size 8192
> ODS version 10.1
> Oldest transaction 31597
> Oldest active 31598
> Oldest snapshot 31592
> Next transaction 31647
> Bumped transaction 1
> Sequence number 0
> Next attachment ID 0
> Implementation ID 16
> Shadow count 0
> Page buffers 9000
> Next header page 0
> Database dialect 3
> Creation date Oct 25, 2005 11:20:54
> Attributes force write
>
> Variable header data:
> Sweep interval: 20000
> *END*
>
>
>Which seems OK.

It's OK from the POV of an advancing OIT (Oldest Interesting Transaction):
the oldest transaction (31597) is only 50 behind the next transaction
(31647), so nothing is blocking garbage collection. That's good.

>What am I doing wrong?
If you are doing multiple operations using DML from application "for"
loops, then that will be a place to start.
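For instance (again only a sketch - DO_PROCESS_BATCH and BATCH_ID are
invented names, not anything from your application), the client-side loop
collapses to a single parameterised call executed inside your transaction:

```pascal
// Sketch: replace a client-side "for" loop of row-by-row DML with one
// call to a (hypothetical) stored procedure that loops in PSQL on the server.
procedure RunBatch(Cn: TIB_Connection; Tr: TIB_Transaction; BatchID: Integer);
var
  Stmt: TIB_DSQL;
begin
  Stmt := TIB_DSQL.Create(nil);
  try
    Stmt.IB_Connection := Cn;
    Stmt.IB_Transaction := Tr;
    // one round trip instead of one per row
    Stmt.SQL.Text := 'EXECUTE PROCEDURE DO_PROCESS_BATCH(:BATCH_ID)';
    Stmt.Prepare;
    Tr.StartTransaction;
    try
      Stmt.ParamByName('BATCH_ID').AsInteger := BatchID;
      Stmt.Execute;
      Tr.Commit;
    except
      Tr.Rollback;
      raise;
    end;
  finally
    Stmt.Free;
  end;
end;
```

The procedure body would hold the FOR SELECT ... DO loop and the updates,
so the row-by-row work happens inside the server, in the same transaction
context, instead of crossing the wire once per statement.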
Helen