Subject | Re: [IBO] DML updating using tib_cursor
---|---
Author | Helen Borrie
Post date | 2002-09-27T16:04:14Z
At 10:10 AM 27-09-02 -0500, you wrote:
>What if.....
>1. I create a generic user that no one has ready access to.
>2. In the database I give this user all access to _DML_ tables.
>3. In the database I create a procedure that posts an update
>   record in DML$IBOCACHE (much like the DML$IBOCACHE trigger)
>4. I can then call this procedure from any app.
>
>Then this will, in effect, simulate another user updating the
>current _live_ record that the user is working with. ????
>
>Steve Fields

Erm...no.

In documentation we loosely use the term "user" when we mean "connection"
and, sometimes, "transaction" (or both). A user is only meaningful in
terms of a connection - so to do any work, your phantom user would (a) have
to be connected and (b) have some way to know that an application wanted it
to update something. In a sense, DMLCaching gives you what you describe,
without a "different" user being involved, since it involves database events.
> > I was under the impression that to commit it
> > would close the datasets relating to the database and
> > would therefore _lose_ the record I was letting the user
> > view.

No. You have RefreshAction and CommitAction to control the position of the
cursor after closing and reopening datasets, and after committing work,
respectively.
> > This is for a basic invoice/lineitem type of system,
> > with twists relevant to our situation. I would make the
> > changes in the background to the line items and the base
> > totals, etc with a tib_cursor or a tib_dsql, (whatever it
> > takes) but needed the values to update immediately on posting
> > the lineitem changes. I am using all of the DML settings as
> > true for each dataset involved (TIB_Queries and a few
> > tib_cursors).

I've kinda lost the original description but I get the impression you are
making a self-perpetuating problem for yourself, by denying the dataset its
native behaviour of updating itself.
Basically, inserting, deleting and editing the current row are "things that
a dataset knows how to do" in our OO model. IBO hides the whole process of
the dataset constructing the SQL required for it to achieve these DML
operations.
With a plain-jane dataset, there is virtually nothing for you as the
programmer to do except to ensure that the dataset knows how to locate the
underlying row in the database (by supplying KeyLinks) and to select the
BufferSynchroFlags to set up how the dataset should respond to
changes. With a more complex dataset, such as one involving joins, you
help the dataset to achieve the DML operations by supplying custom SQL in
the InsertSQL, EditSQL and DeleteSQL properties.
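
To make that concrete, here is a minimal sketch of a joined dataset set up
for DML. The property names (KeyLinks, EditSQL, InsertSQL, DeleteSQL) are
the IBO ones discussed above, but the table, column and component names are
invented for illustration:

  // Hypothetical invoice/lineitem join; identifiers are examples only.
  qryItems.SQL.Text :=
    'SELECT LI.ITEM_ID, LI.INVOICE_ID, LI.QTY, P.DESCRIPTION ' +
    'FROM LINEITEM LI JOIN PRODUCT P ON P.PROD_ID = LI.PROD_ID';

  // KeyLinks tell the dataset how to locate the underlying row uniquely.
  qryItems.KeyLinks.Clear;
  qryItems.KeyLinks.Add('LI.ITEM_ID');

  // Custom DML so the joined dataset knows which table to write to.
  qryItems.EditSQL.Text :=
    'UPDATE LINEITEM SET QTY = :QTY WHERE ITEM_ID = :ITEM_ID';
  qryItems.InsertSQL.Text :=
    'INSERT INTO LINEITEM (ITEM_ID, INVOICE_ID, QTY) ' +
    'VALUES (:ITEM_ID, :INVOICE_ID, :QTY)';
  qryItems.DeleteSQL.Text :=
    'DELETE FROM LINEITEM WHERE ITEM_ID = :ITEM_ID';

With KeyLinks and those statements in place, Insert/Edit/Delete on the
joined set behave just as they would on a single-table set.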
Whether your dataset uses the default SQL statements or custom ones you
have supplied in the xxxxSQL properties, refreshing the dataset after a
commit is sufficient to give your application the updated view of the
database state. An appropriate combination of BufferSynchroFlags,
RefreshAction and CommitAction will allow you to set up the exact behaviour
you want.
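
As a sketch of such a combination - the property names are as above, but
the enumeration and set values are from memory, so verify them against the
declarations in your IBO version:

  // Assumed values - check TIB_CommitAction, TIB_RefreshAction and
  // the BufferSynchroFlags set type in the IBO sources before relying
  // on these.
  qryItems.CommitAction := caRefresh;       // re-fetch buffers after Commit
  qryItems.RefreshAction := raKeepDataPos;  // reposition on the same row by key
  qryItems.BufferSynchroFlags := [bsAfterEdit, bsAfterInsert];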
What DML caching does is to add another level of synchronisation (or three
more levels, to be more exact). It makes your datasets aware of changes
committed by other transactions as soon as those commits occur, rather
than waiting until the next time the dataset's work is committed. If you
don't code what you want the dataset to do in response to a DMLCache
message (using the OnDMLCacheAnnounceItem and OnDMLCacheReceiveItem
events), the behaviour with DMLCaching will be no different to the
standard behaviour of IBO datasets...
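
For instance, a receive handler might simply refresh the dataset when
another transaction announces a committed change. The parameter list below
is indicative only (check the event type's declaration in your IBO
sources), and the component and data module names are invented:

  procedure TdmInvoices.qryItemsDMLCacheReceiveItem(Sender: TIB_Statement;
    AItem: TIB_DMLCacheItem; var AHandled: Boolean);
  begin
    // Another transaction has committed a change to a table this
    // dataset displays: refresh now so the user sees the committed
    // state straight away.
    qryItems.Refresh;
    AHandled := True;
  end;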
Helen