Subject: Re: Batch statements
> - or better, as Jim pointed out, the actual problem which is that
> every byte moving between client and server is copied at least
> eight times.

Ok. So let's fix this issue.

> Adding new bulk statement that will further complicate error
> recovery on the client side (ok, there was a uniqueness violation
> .... in which of the 37 records?)

What is the strategy for the following code?
EXECUTE BLOCK AS
BEGIN
  INSERT INTO some_table VALUES ('a');
  INSERT INTO some_table VALUES ('b');
  INSERT INTO some_table VALUES ('z');
END
There is no recovery, as I understand. If an error happens within this
block, all changes are rolled back to the internal savepoint.
So we have at least one solution: allow the complete batch to be
executed as if it were a single statement.
A better solution would be (as specified in JDBC) to return an
insert/update/delete count for each executed statement. These counts
should be returned in any case, whether or not an error happened. The
statements that did not execute would report a count of -1. Client
recovery is then pretty straightforward: by checking the counters, the
client knows which statements were executed and which were not, and
can easily re-submit the ones that failed or do something different.
If we go this way, the execution can no longer be atomic.
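To illustrate the recovery loop the per-statement counters enable,
here is a minimal sketch in Java. The class and method names are
illustrative only, not Firebird or JDBC API; it just shows how a
client would scan the returned counts for the -1 markers and collect
the statements to re-submit:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchRecovery {
    // -1 marks a statement that was not executed, per the
    // convention proposed above (JDBC itself uses its own
    // constants for this, e.g. Statement.EXECUTE_FAILED).
    static final int NOT_EXECUTED = -1;

    // Given the update counts returned for a batch, collect the
    // indices of the statements the client should re-submit.
    static List<Integer> failedIndices(int[] updateCounts) {
        List<Integer> failed = new ArrayList<>();
        for (int i = 0; i < updateCounts.length; i++) {
            if (updateCounts[i] == NOT_EXECUTED) {
                failed.add(i);
            }
        }
        return failed;
    }

    public static void main(String[] args) {
        // Suppose a 5-statement batch where statements 2 and 4 hit
        // a uniqueness violation; the rest each affected one row.
        int[] counts = {1, 1, -1, 1, -1};
        System.out.println(failedIndices(counts)); // prints [2, 4]
    }
}
```

The point is that the recovery decision is purely local to the
client: no extra round trip is needed to find out which of the 37
records violated the constraint.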
Does it solve the TCP stack overhead? I think yes. The XSQLDA
structures are passed unmodified to the engine. However, this does not
solve the copying problem: if each byte is copied 8 times, Firebird
would still copy the same amount of data.
Any idea how to find the real bottleneck?