Subject: Re: [IBO] support for 2 transactions in ib_query
Author: Mihai Chezan
Thank you for taking the time to answer my post. My question from the
first post was answered.

Comments on your post:

> However, it does separate the DML context from the
> user's view.
That's not what I want. I want the view to reflect DML that is done behind
the scenes, without any extra coding. So I want the view to know when there
are modifications to it and to reflect them, but without doing a full refresh.

> No. One IB_Query cannot be in two transactions. That isn't an IBO
> limitation: it's a database rule.
I think you are referring to logical transactions, and there I agree with
you. But I was referring to physical transactions.

> What FIBPlus does, it starts a NEW transaction when it performs each DML
> operation. With IBO, if you want to follow this model, use a TIB_DSQL in a
> separate transaction for your DML operations. If you want to be able to do
> it in one shot, by default, without considering the consequences for
> contention and integrity, then use FIBPlus.
Yes, I would like to do it in one shot.
I don't understand which "consequences for contention and integrity" you
are referring to.
As long as you can control (start/commit/rollback) the transaction in which
the DML takes place, I see no problem.

> Wrong, IBO has much better control of cursors than other
> implementations. What it does, it invalidates *only that row*. The
> current state of the dataset buffers is just as it was before the commit
> (unless you set it to be something different...) Next time your app needs
> to access that row, it knows to re-fetch that row, even if it is currently
> in the buffer.
I'm not so sure about the "it invalidates *only that row*" part.
I've made a simple test:
- tb_test has 500 rows.
- an ib_query (select * from tb_test) is linked to an ib_transaction, has
CommitAction set to caRefresh, and shows its result in an ib_grid.
- a button has OnButtonClick=ib_transaction.Commit.
- we scroll to record 400 and click the button.
If you add an IB_MonitorDialog and check the Statement and Row checkboxes,
you will see that IBO invalidates all rows, not *only that row*. So the
commit invalidates all rows in the buffer. I don't see how it could be
otherwise, as long as Firebird doesn't let you open a cursor and say "go to
record 400". I know that in FB 1.0.3 you can do SELECT [FIRST (<integer
expr m>)] [SKIP (<integer expr n>)], but I don't think IBO uses this.
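To illustrate the point: a minimal sketch of the difference between re-fetching the whole cursor and fetching only a window around the current record. sqlite3 stands in for Firebird here, and LIMIT/OFFSET plays the role of Firebird's FIRST/SKIP; the table and column names are invented for the example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tb_test (id INTEGER PRIMARY KEY, payload TEXT)")
con.executemany("INSERT INTO tb_test VALUES (?, ?)",
                [(i, f"row {i}") for i in range(500)])
con.commit()

# Full refresh: the cursor is reopened and every row up to the current
# position (record 400) has to travel over the wire again.
full = con.execute("SELECT id FROM tb_test ORDER BY id").fetchmany(400)

# Windowed refresh: fetch only the rows around position 400.
# Firebird 1.0 equivalent: SELECT FIRST 10 SKIP 399 id FROM tb_test ORDER BY id
window = con.execute(
    "SELECT id FROM tb_test ORDER BY id LIMIT 10 OFFSET 399").fetchall()

print(len(full), window[0][0])  # 400 399
```

The windowed form is exactly what FIRST/SKIP would make possible; without it, repositioning after a refresh means walking the cursor from the start.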

> Of course. But, again, if you are not interested in what has happened at
> other users' locations, you can use an IBO Bookmark to invalidate that
> single row in the buffer. Then, IBO will re-fetch that row, just as it
> does with the real-time model.
That would require additional work/code. And which row should I invalidate
if I make an insert? (That row is not in the buffer yet.)

> Also, IBO doesn't "Append" rows to datasets (although it can do so, if you
> want BDE compatibility). ("Append" is something that happens in file-based
> databases like Paradox, not in Firebird). IBO's inserts occur "in place"
> (also an artificial concept). So the buffer can be refreshed to get the
> latest view of, say, an ordered set, without disturbing the unchanged rows
> that were in the buffer.
When I wrote "append/post" I was referring to the methods you call on
ib_query. I know that an append/post is translated into an insert SQL
statement. I also know about sync after insert and after edit. And I
understand that you can invalidate a single row in the buffer and IBO will
refetch only that row from the server by building and running a select SQL
statement like: select ... from ... where pk = :pk.
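The single-row refetch described above can be sketched like this (sqlite3 again as a stand-in for Firebird; the client-side buffer dict and the table/column names are invented for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tb_test (pk INTEGER PRIMARY KEY, val TEXT)")
con.executemany("INSERT INTO tb_test VALUES (?, ?)",
                [(i, f"v{i}") for i in range(5)])
con.commit()

# Client-side buffer of fetched rows, keyed by primary key.
buffer = {pk: val for pk, val in con.execute("SELECT pk, val FROM tb_test")}

# Some other statement changes row 3 behind the buffer's back.
con.execute("UPDATE tb_test SET val = 'changed' WHERE pk = 3")
con.commit()

# Invalidate *only that row* and refetch it by primary key
# (select val from tb_test where pk = :pk), leaving the rest untouched.
buffer[3] = con.execute(
    "SELECT val FROM tb_test WHERE pk = ?", (3,)).fetchone()[0]

print(buffer[3], buffer[0])  # changed v0
```

This is the mechanism both sides seem to agree on; the disagreement is only about who decides *when* a row is invalidated.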

> I don't believe that. I know that the Devrace guys made big noises about
> this when they implemented it as the default way to do datasets. You'd
> have thought they had just invented sliced bread!! It panders to those who
> want to transport huge datasets across the wire (a characteristic of bad
> design for client/server systems) and it does solve the endemic problem
> that FIBPlus and IBX have with stale buffers and their inability to create
> a transaction context that extends beyond the limits of the physical
> transaction. It's not a magic bullet; it's a workaround for deficiencies
> that IBO doesn't have.
> With IBO, you already have (and always had) the capability to use this
> model of dataset management, which works OK where work is dispersed across
> a wide range of the database. But to make it the default behaviour of
> datasets is to sacrifice the two main advantages of proper transaction
> control: atomicity and a low level of conflict. IBO takes care of these
> beautifully. It even makes it reasonable to use CommitRetaining (wisely,
> not blindly!!) and to close off unfinished transactions that have been
> abandoned or allowed to continue too long.
It seems that I don't fully understand the concepts behind IBO or FIBPlus,
because I see nothing wrong in separating, at the physical transaction
level, the select (read-only) from the insert/update/delete (read/write).
This separation allows you to commit your work (a hard Commit, not
CommitRetaining) and thereby close the write transaction without
invalidating the select cursor of the read-only transaction. I don't
understand how you "sacrifice the two main advantages of proper transaction
control: atomicity and a low level of conflict" by using this model.
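The separation I have in mind can be sketched with two connections. Note this is only an analogy: sqlite3 in WAL mode stands in for Firebird's snapshot transactions, and the file name and table are invented for the example.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# "Write" connection: performs the DML and commits on its own schedule.
writer = sqlite3.connect(path, isolation_level=None)  # autocommit
writer.execute("PRAGMA journal_mode=WAL")             # readers don't block writers
writer.execute("CREATE TABLE tb_test (id INTEGER PRIMARY KEY)")
writer.executemany("INSERT INTO tb_test VALUES (?)",
                   [(i,) for i in range(500)])

# "Read-only" connection: holds a long-lived snapshot transaction.
reader = sqlite3.connect(path, isolation_level=None)
reader.execute("BEGIN")
before = reader.execute("SELECT COUNT(*) FROM tb_test").fetchone()[0]

# The write side commits new work; the reader's open snapshot is unaffected.
writer.execute("INSERT INTO tb_test VALUES (500)")

during = reader.execute("SELECT COUNT(*) FROM tb_test").fetchone()[0]
reader.execute("COMMIT")   # only after ending the read transaction...
after = reader.execute("SELECT COUNT(*) FROM tb_test").fetchone()[0]

print(before, during, after)  # 500 500 501
```

The write transaction commits and closes cleanly, yet the read side keeps a stable view until it chooses to refresh, which is exactly the behaviour I want.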
In our ERP application, user input (insert/update/delete) is 10% and the
remaining 90% is automated input (by automated I mean many users working
with barcode reader devices).
The output (selects) is 95% for user input and 5% for automated input.
The level of concurrency is 5% for user input and 95% for automated input.
So when, for user input, I do a "select * from tb", I know that the
information returned is still up to date even after an hour, so there is no
need to refresh; and for automated input the information displayed
(obtained via selects) is minimal, so I can refresh it many times to get
the latest version without worrying too much about performance.

Mihai Chezan