Subject Re: [IBO] Strange sorting and recordcount issue.
Author ajwreyford
--- In, Helen Borrie <helebor@...> wrote:
> On the client side you have output sets - the results of
> queries. These sets live in a series of buffers, including one
> that contains all the key field values. You haven't said anything
> about your keylinks (from which IBO forms this buffer) so it would
> be impossible to give you an answer that explains in detail what's
> going on with those sets.

My Keylinks is set to: ANIMAL.ANIMALID

> On the server side you have tables: the actual stored data. Your
> script operates on the tables. The server doesn't know anything
> about your client-side sets or IBO's buffers.
> On the client side you have the original buffers. These will have
> been kept up-to-date with any operations you performed on the
> sets. Anything you do at the client will work on the buffers in
> the state they were in the last time you did a client-side
> operation and reflect that state, e.g., inserts, deletes, key
> changes, etc.
> Until you refresh the output (close and open the set), the buffers
> don't have anything else to go on.

I initially suspected this, so I called IB_QueryAnimal.Close before
I executed the script, then IB_QueryAnimal.Open after the script had
successfully emptied the table server side.

I then populated the query with the animals ordered oldest first,
from animal Id = 1 through to 820 for the youngest.

I would have expected the animal query to be ordered correctly at
this point, but for some reason IB_QueryAnimal.First still returns
820. The RecordCount was correct at this point: 820.

If I closed and opened it again, after populating the query as
mentioned above, then it was ordered correctly.
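
To be concrete, the sequence that finally gave me the right ordering
looks roughly like this (simplified from my actual code; the script
component name is just what I use here):

```pascal
// Close the set so IBO drops its stale buffers (including the
// key buffer built from Keylinks)
IB_QueryAnimal.Close;

// Run the script that empties and repopulates ANIMAL server side
IB_ScriptRebuild.Execute;

// Re-open: IBO fetches fresh rows and rebuilds its buffers,
// so First now returns AnimalId = 1 as expected
IB_QueryAnimal.Open;
```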

So I understand your point about the server not knowing about IBO's
buffered sets, but why does IBO's buffered dataset not maintain the
correct ordering after the records have been added?

Is my Keylinks entry above perhaps the offender?

> I suspect that you either don't have any Keylinks defined, or you
> have KeyLinksAutoDefine set on an unkeyed set. In that case, IBO
> will have the db_keys (rdb$db_key) as the keys. If you delete all
> the records on the server side and then fill the table again, the
> db_keys will have changed but, until you refresh the set, they will
> be wrong in the buffer.


> As for RecordCount, all it can ever be is either a) the result of a
> select count(*) over the defined set (which isn't done by default,
> because it is costly and unreliable), or b) a row-by-row count of
> records fetched from the server's output buffer. It becomes
> "accurate" (for the set only) once the last eligible record has
> been fetched. In a multi-user environment, RecordCount can never be
> considered accurate; it's nothing but a snapshot of one of these
> things, that has a strong propensity to go out of date.
> IBO starts maintaining the RecordCount number itself, incrementing
> and decrementing according to positioned inserts and deletes that
> are done on the client buffers and further fetches that are done
> as the buffer pointers head towards the last record. Once the last
> record is fetched, IBO's counter should be more or less
> accurate....but there's no way it could be, if you go off and do
> stuff to the table outside the context of the set.

OK .. the script confused it, but if I don't do anything like that,
can I assume it will be accurate?

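In other words, as long as nothing outside the set touches the table,
something like this should report the right count (a sketch using my
component names):

```pascal
IB_QueryAnimal.Open;

// Navigating to the last row forces all remaining rows to be
// fetched, after which RecordCount reflects the whole set
IB_QueryAnimal.Last;
ShowMessage(IntToStr(IB_QueryAnimal.RecordCount));
```
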
> If this is a serious question then I hope your experiences with
> this design at least illustrate why it is not a dependable approach
> to getting a reliable set into multiple clients.
> Abandon this table
> idea and instead write a selectable stored procedure to produce the
> ordered set. This ensures that you will always be able to get an
> up-to-date snapshot without destroying any data.

I must get the hang of using SPs. I will try after reading up on
them in your book first.
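
In case it helps anyone else following along, I think a selectable
procedure along these lines is what Helen means (column names are
guessed from my ANIMAL table, so adjust to suit):

```sql
SET TERM ^ ;

CREATE PROCEDURE GET_ANIMALS_ORDERED
RETURNS (ANIMALID INTEGER, ANIMALNAME VARCHAR(40))
AS
BEGIN
  /* Walk the table in key order and emit one output row
     per SUSPEND - this is what makes the SP selectable */
  FOR SELECT ANIMALID, ANIMALNAME
      FROM ANIMAL
      ORDER BY ANIMALID
      INTO :ANIMALID, :ANIMALNAME
  DO
    SUSPEND;
END ^

SET TERM ; ^
```

The client then just does SELECT * FROM GET_ANIMALS_ORDERED and gets
a fresh, correctly ordered snapshot every time the set is opened,
without ever having to empty and refill the table.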