| Subject | Re: [firebird-support] How to crash FB |
|---|---|
| Author | Helen Borrie |
| Post date | 2004-10-12T00:24:10Z |
At 02:38 PM 11/10/2004 -0400, you wrote:
>FB 1.5.1.4481
>
>I have a routine which does:
>
>Select single record by PK from single table.
>Issue a new update on same record to same table.

Why do you need to select the record first?
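[Editor's note: the per-record round trip above can usually collapse into a single parameterized UPDATE, with no preliminary SELECT. A minimal sketch, using Python's built-in sqlite3 purely as a stand-in for Firebird (the `items` table and its columns are hypothetical; with the .NET provider, FbCommand and FbTransaction would play the same roles):]

```python
import sqlite3

# Hypothetical schema standing in for the real table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, qty INTEGER)")
conn.executemany("INSERT INTO items VALUES (?, ?)", [(1, 10), (2, 20)])

def apply_updates(conn, changes):
    """Apply all (new_qty, id) pairs in ONE transaction, one UPDATE
    per record -- no SELECT-before-UPDATE round trip."""
    with conn:  # BEGIN ... COMMIT (or ROLLBACK if an exception escapes)
        conn.executemany("UPDATE items SET qty = ? WHERE id = ?", changes)

apply_updates(conn, [(11, 1), (21, 2)])
print(list(conn.execute("SELECT id, qty FROM items ORDER BY id")))
# -> [(1, 11), (2, 21)]
```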
>
>This happens in a transaction and is repeated for each record. Normally a
>couple dozen times. The complete set is done in a transaction, not each one.
>Because of a flaw we had a situation that instead of a few dozen times it
>ran 10,000. Now we fixed this, but the bigger problem is FB just dies. It
>appears that when a transaction log is too big it just dies??

FB doesn't keep a transaction log. What it does do is keep a memory
structure that it uses for transaction accounting. You don't get I/O
errors for memory structures.
>We've
>reproduced this on our test server and two developer machines.
>
>The errors are below. When I look in Guardian, it says server terminated
>abnormally or something like that.

This error comes from your interface (dotnetprovider?)
>
>Sometimes we get this:
>Exception Details: FirebirdSql.Data.Firebird.FbException: I/O error during
>"CreateFile (open)" operation for file
>"D:\IISDomains\purchase.atozed.com\Data\APPRinok.gdb" Error while trying
>to open file

This is gdscode 335544734. It can occur when the server is asked to do a
grouping or a sort and can't find its temporary sort space; or there is not
enough temporary sort space available for the sort operation. It's
probably unrelated to your other problem; but it could be that the
firebird.log has blown out the available space on the partition where it
lives.
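[Editor's note: the "log filled the partition" hypothesis is quick to check. A sketch of the check in Python; the log path shown is an assumption -- substitute wherever firebird.log actually lives on your server, and the 100 MB threshold is arbitrary:]

```python
import os
import shutil

# Hypothetical path -- adjust to your installation.
log_path = "/var/log/firebird/firebird.log"

if os.path.exists(log_path):
    log_bytes = os.path.getsize(log_path)
    # Free space on the partition holding the log.
    total, used, free = shutil.disk_usage(os.path.dirname(log_path))
    print(f"firebird.log: {log_bytes} bytes; {free} bytes free on partition")
    if free < 100 * 1024 * 1024:  # under ~100 MB free: worth investigating
        print("Partition nearly full -- sort space may be exhausted too")
```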
>But normally we get this:
> Error reading data from the connection.
>Description: An unhandled exception occurred during the execution of the
>current web request. Please review the stack trace for more information
>about the error and where it originated in the code.
>
>Exception Details: FirebirdSql.Data.Firebird.FbException: Error reading data
>from the connection.
>
>Source Error:
>
>An unhandled exception was generated during the execution of the current web
>request. Information regarding the origin and location of the exception can
>be identified using the exception stack trace below.
>
>Stack Trace:
>
>[FbException: Error reading data from the connection.
>]
> FirebirdSql.Data.Firebird.FbTransaction.Rollback() +121
> ADOPlus.DBConnection.RollbackTrans() +13

Something is going on in your interface in response to an unhandled
exception from the server which, eventually, crashed something. Can't tell
from your info whether it was the application that crashed, or the
server. The firebird.log would give you more information about that.
The unhandled exception is possibly gdscode 335544663, Too many concurrent
executions of the same request. 10,000 updates in a single transaction
wouldn't be a problem, but 10,000 clones of the same SELECT request very
well could be. I don't even like to think how complicated that could get if
some of those selects are re-selecting the same rows that already have
SELECT clones and update requests pending for them...can't tell from your
description whether that possibility exists.
As Daniel Rail mentioned, there is some top limit for savepoints; though
I'm not aware of any outstanding problem regarding them.
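[Editor's note: one way to keep both the request-clone count and any per-transaction savepoint bookkeeping bounded is to commit in batches rather than in one giant transaction. A sketch, again using sqlite3 as a hedged stand-in for Firebird; the table and the batch size of 500 are made-up values to tune for a real workload:]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, qty INTEGER)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, 0) for i in range(10_000)])
conn.commit()

BATCH = 500  # arbitrary; bounds the work accumulated per transaction

changes = [(i, i) for i in range(10_000)]  # (new_qty, id) pairs
for start in range(0, len(changes), BATCH):
    with conn:  # one transaction per batch instead of one for all 10,000
        conn.executemany("UPDATE items SET qty = ? WHERE id = ?",
                         changes[start:start + BATCH])

print(conn.execute("SELECT qty FROM items WHERE id = 9999").fetchone()[0])
# -> 9999
```

Note the single prepared statement text reused for every row; what you want to avoid is allocating a fresh statement (or a fresh SELECT cursor) per record inside the loop.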
./heLen