Subject | Re: [IBO] General question regarding querying records from the server
---|---
Author | Helen Borrie
Post date | 2004-03-02T22:12:32Z
At 09:13 PM 2/03/2004 +0000, you wrote:
>When u use the TIB_Query object, one can set the MaxRows property to
>1, so that only one row at a time is downloaded from the server. As
>i understand this, it would reduce the network traffic load and time
>for download considerably (since i am doing all my development on one
>computer and do not have access to a network, i have to ask this
>question rather than test it out). Please let me know if i am
>missing something.

In general, if you only need one row, only request one row *in your
SQL*. If the user needs multiple rows then don't set MaxRows to 1. Rows
come across the network in packets, which is efficient. "Speed" can be
perceived in two ways - how fast the first packet arrives and how long it
takes to get the whole set. In well-designed apps, it's not an issue. You
need to understand that, if your SQL requests a huge set then you are stuck
with it - one way or the other, you are going to have to get *all* of the
rows by the time the transaction completes.
It's of more importance to understand what the users need to see and
do. The general principle is not to request huge sets. Provide "select
lists" to limit the dynamic fetching to what the user actually needs to
have available.
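The "select list" idea can be sketched with any SQL engine. Here's a minimal illustration using Python's stdlib sqlite3 as a stand-in for the Firebird server (the table and column names are invented for the demo; the fetch-limiting principle is what matters):

```python
import sqlite3

# In-memory demo table standing in for a large server-side table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, region TEXT, name TEXT)")
con.executemany(
    "INSERT INTO customer (region, name) VALUES (?, ?)",
    [("NORTH", f"N-{i}") for i in range(500)] + [("SOUTH", f"S-{i}") for i in range(500)],
)

# Bad: requests the whole table; every row has to cross the wire eventually.
all_rows = con.execute("SELECT id, name FROM customer").fetchall()

# Better: the user picks a region from a select list first, and the query
# is parameterised so only the subset they actually need is requested.
subset = con.execute(
    "SELECT id, name FROM customer WHERE region = ?", ("NORTH",)
).fetchall()

print(len(all_rows), len(subset))  # 1000 500
```

The same shape applies to a parameterised TIB_Query: the user's selection fills the parameter, and the server only ever has to deliver the rows the user asked for.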
>If this does not work, then not opening queries unless there are
>parameters filled in for search conditions that limit the size of the
>dataset to something reasonable must happen first.

Echo. :-)
>In addition, one can set the transaction properties such that the
>query does not "lock" records unless it is in the edit state..
>if correct,

It's incorrect. Fb/IB is an optimistic locking system. Locks don't occur
until a change is *posted to the server*. Putting a dataset into Edit
state does nothing on the server unless you explicitly request a
pessimistic lock -- unnecessary with Fb/IB in the 99% case.
>this should eliminate any loss of functionality of the
>server if several users have queries open on the same dataset.
>Please let me know again if i am missing something.

Yup, you're missing the fact that Fb/IB is *designed* to be
multi-user. The secret is in transaction isolation, not locking. This
RDBMS doesn't use locks, it uses record versions. The strategy for blocking
is optimistic (no blocking until an update is actually posted) and
row-level. Table-level locking is possible but is rarely used.
It's of no account if the same users have queries open on the same table or
have overlapping datasets in their respective transactions. The conflict
issues arise if they are frequently performing updates on the same *rows*
and the application is not handling the conflicts efficiently.
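A toy model in plain Python (an illustration of the principle, not Firebird's actual engine) shows how record versions turn into an update conflict only when two transactions post changes to the *same* row:

```python
# Toy model of optimistic, row-level conflict detection via record versions.
# The class and method names here are invented for the illustration.

class UpdateConflict(Exception):
    pass

class Table:
    def __init__(self):
        self.rows = {}        # row_id -> (version, value)

    def read(self, row_id):
        return self.rows[row_id]

    def post_update(self, row_id, seen_version, new_value):
        current_version, _ = self.rows[row_id]
        if current_version != seen_version:
            # Someone else committed a newer version since we read the row.
            raise UpdateConflict(f"row {row_id} changed underneath us")
        self.rows[row_id] = (current_version + 1, new_value)

t = Table()
t.rows[1] = (0, "original")

# Two transactions read the same row: no conflict, nothing is blocked.
v_a, _ = t.read(1)
v_b, _ = t.read(1)

# Transaction A posts first and wins.
t.post_update(1, v_a, "A's change")

# Transaction B posts against a stale version and gets a conflict.
try:
    t.post_update(1, v_b, "B's change")
except UpdateConflict as e:
    print("conflict:", e)
```

Note that reading never conflicts with anything; the conflict only surfaces at the moment the second update is posted.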
>To further prevent slowing down or stalling the server, one can set
>timeoutprops so that changes that are made are committed quickly when
>a user places a row in the edit state. Is this correct?

It's correct that you can set timeoutprops to force (or encourage!) users
to commit their work. The main thing about timely committing is that it
frees up resources (memory and disk) that otherwise get
stuck. Badly-written applications tend to slow down rapidly and freeze
up. Memory gets consumed with transaction state data that the system can't
eliminate and searches get slower and slower as the database pages get
filled with garbage that can't be collected. In these bad applications,
the more users with pending work, the faster the system will freeze up.
>as concerns insert and delete conditions, this will not affect other
>queries if tiConcurrency is not set? or will it make a difference.
>I'm not sure, perhaps someone could help me.

tiConcurrency is *one* isolation level. The others are tiCommitted and
tiConsistency. tiCommitted is recommended for interactive work because the
latest version committed by other transactions becomes available to your
transaction immediately; tiConcurrency is ideal for reports because it
gives a "snapshot" view that lasts until the transaction
ends. tiConsistency is too heavy for just about anything - pretend it's
not there. It puts a table lock on every table accessed by the transaction.
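The visibility difference between the two useful levels can be sketched in a few lines of plain Python (again a toy model of the rules, not the real engine):

```python
# Toy illustration of the two useful isolation levels: a read-committed
# transaction sees each newly committed value as it appears; a snapshot
# ("concurrency") transaction keeps seeing the state as of its own start.

committed = {"balance": 100}          # the latest committed state

class Snapshot:
    def __init__(self, db):
        self.view = dict(db)          # frozen copy taken at transaction start
    def read(self, key):
        return self.view[key]

class ReadCommitted:
    def __init__(self, db):
        self.db = db                  # always reads the live committed state
    def read(self, key):
        return self.db[key]

report = Snapshot(committed)          # like tiConcurrency
screen = ReadCommitted(committed)     # like tiCommitted

committed["balance"] = 250            # another transaction commits a change

print(report.read("balance"))   # 100  (stable snapshot, good for reports)
print(screen.read("balance"))   # 250  (sees the new committed version)
```

This is why a report run under tiConcurrency adds up consistently even while other users keep committing, and why an interactive screen under tiCommitted stays current.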
All work (including Selects) happens in a transaction. So if you have a
grid linked to a read-write SELECT and you just leave it all day without a
"hard commit", you are going to have horrible problems. (Autocommit with
no forced hard commits is the main source of these...the typical BDE to IBO
conversion scenario...)
Your transaction settings will influence how the app responds to conflicts
if they happen. For example, if you have a lot of users all hitting the
exact same subset of rows constantly, you'll want to set LockWait to
true; if the chances of simultaneous hits on a single row are low, you'll
want it false, so you can deal with the lock conflict immediately.
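The client-side difference between the two LockWait settings can be sketched like this (the helper names below are hypothetical stand-ins, not real IBO or driver calls; real LockWait waiting happens inside the server, not in a client loop):

```python
import time

class LockConflict(Exception):
    pass

def post_update(attempt_ok):
    """Hypothetical stand-in for posting an update; raises when another
    transaction has a pending change on the same row."""
    if not attempt_ok:
        raise LockConflict("row is locked by another transaction")

def post_with_wait(attempts):
    # LockWait = True style: keep waiting until the other transaction
    # finishes, then the post goes through.
    for i, ok in enumerate(attempts):
        try:
            post_update(ok)
            return f"posted on attempt {i + 1}"
        except LockConflict:
            time.sleep(0.01)    # back off briefly, then try again
    raise LockConflict("gave up")

def post_no_wait(ok):
    # LockWait = False style: the conflict comes back immediately and the
    # application decides what to tell the user right away.
    try:
        post_update(ok)
        return "posted"
    except LockConflict:
        return "conflict reported to user immediately"

print(post_with_wait([False, False, True]))
print(post_no_wait(False))
```

With high contention on the same rows, waiting usually resolves itself; with low contention, an immediate report lets the application handle the rare conflict explicitly.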
One thing that ex-desktop programmers have to get past is the false belief
that lock conflicts are a bad thing. In multi-user systems with high
contention, they are the natural, expected thing and, if you use products
such as Delphi and IBO properly, you can manage the conflicts beautifully,
i.e. the way the DBMS was *designed* to work. If you try to force FB/IB
into the Access/Paradox/MSSQLServer two-phase locking model, you'll kill it.
Your application design should be targeted at *avoiding* fetching unwanted
rows rather than the typical Delphi grid-driven browse-and-edit in a
humungous read-write transaction. The "thinner" the client interface, the
more effective the application.
It is beneficial with client/server to design user interfaces that behave
like web forms - point-and-shoot, fast DML requests, delegating as much
"business" to the server as you can. You should design interfaces in terms
of autonomous tasks and wrap each task in a transaction that starts when
the task begins and ends when the task ends.
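The "one task, one transaction" shape maps naturally onto a context manager. A sketch using stdlib sqlite3 as a stand-in for the real connection (the table and task are invented for the demo):

```python
import sqlite3
from contextlib import contextmanager

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")

@contextmanager
def task_transaction(con):
    """Start work when the task begins, commit when it ends,
    roll back if the task fails partway through."""
    try:
        yield con
        con.commit()
    except Exception:
        con.rollback()
        raise

# The whole "take an order" task lives inside one short transaction.
with task_transaction(con) as c:
    c.execute("INSERT INTO orders (status) VALUES (?)", ("new",))
    c.execute("UPDATE orders SET status = ? WHERE id = ?", ("confirmed", 1))

print(con.execute("SELECT status FROM orders").fetchone()[0])  # confirmed
```

Everything between entering and leaving the block is one atomic unit: the transaction never outlives the task, so nothing is left hanging for the server to carry.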
Using Autocommit is not a mortal sin. "Big tasks" - such as a human
editing dozens or hundreds of rows all in the same way - work best with
Autocommit because of the savings in server and client resources...but the
task should be broken up into discrete bites with a regular forced hard
commit to keep the server's housekeeping moving along. Timeoutprops are
good for this; they are also good for protecting the system from users who
leave unfinished work hanging about.
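Breaking a big edit into bites with a forced hard commit every N rows can be sketched like this (stdlib sqlite3 standing in for the Firebird connection; the table and batch size are invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE item (id INTEGER PRIMARY KEY, price REAL)")
con.executemany("INSERT INTO item (price) VALUES (?)", [(10.0,)] * 1000)
con.commit()

BATCH = 100   # hard commit every 100 rows to keep housekeeping moving

ids = [row[0] for row in con.execute("SELECT id FROM item")]
for n, item_id in enumerate(ids, start=1):
    con.execute("UPDATE item SET price = price * 1.1 WHERE id = ?", (item_id,))
    if n % BATCH == 0:
        con.commit()       # releases this bite's transaction resources
con.commit()               # pick up any remainder

print(con.execute("SELECT COUNT(*) FROM item WHERE price > 10").fetchone()[0])
```

Each bite is small enough that nothing accumulates on the server, but the human doing the edits never has to think about transaction boundaries.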
Helen