Subject | RE: [firebird-support] CONTROLLING BY TABLE AMOUNT OF INFORMATION READ
---|---
Author | Svein Erling Tysvær |
Post date | 2010-05-31T07:49Z |
>> Hi,
>> I have a response time problem in a Delphi application. There is a
>> table with many small records that I have to process locally
>> sequentially until I get a record that satisfies a condition -can't do
>> with a stored procedure-
>> Would like to read a record -along with many others in the page- so I
>> minimize network transfers.
>> Have tried increasing cache but don't see a difference.
>> Could you recommend a tool that could help me to monitor the effect of
>> changing configuration parameters?
>> I'm currently using Interbase drivers. Could I get better options with
>> dbExpress?
>>
>> Hope you guys can help me-
> I don't think you've given us enough information to make an intelligent
> response.
> Alan

Well, maybe not enough for an intelligent solution, but enough to respond with a few questions...
Your description reads to me like it could have been written by a programmer who is used to doing things locally on one computer and is not used to handling databases in a client/server environment (where there can be multiple simultaneous users). If so, odds are that you can get lots of useful suggestions on this list by providing more information.
What does 'many small records' mean? 100, 100 million, or something else? I can understand that some checking has to be done locally, but it is generally a huge advantage to do as much of the checking as possible in your queries. You may be able to cut the response time by a noticeable amount by minimizing network transfers or changing configuration, but a small change to a query can cut the execution time from hours to seconds (see the sketch below).
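As a hypothetical sketch (the ORDERS table and its columns are invented for illustration), letting Firebird evaluate the condition means only the matching row has to cross the network:

  /* instead of fetching every row and testing it in Delphi,
     let the server find the first record that satisfies the condition */
  SELECT FIRST 1 ID, STATUS, AMOUNT
  FROM ORDERS
  WHERE STATUS = 'PENDING'
  ORDER BY ID;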
The closest I come to having to go through records sequentially is when I'm supposed to select records randomly (and sometimes I have to select persons randomly rather than records). In that case I typically count the number of records that fit my criteria, go sequentially through the primary key (or DISTINCT PERSON_ID), and keep those that match a random algorithm. Only after that do I fetch the actual data for the chosen records. Hence, I select entire rows only for the records of interest, and things work sufficiently quickly for my case.
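A minimal sketch of that two-step approach (SURVEY, PERSON_ID and COMPLETED are invented names):

  /* step 1: fetch only the keys that satisfy the criteria */
  SELECT DISTINCT PERSON_ID
  FROM SURVEY
  WHERE COMPLETED = 1;

  /* the client counts the keys and applies its random algorithm,
     then step 2 fetches full rows only for the chosen keys */
  SELECT *
  FROM SURVEY
  WHERE PERSON_ID IN (101, 407, 912);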
So please tell us more about your situation and we can try to come up with a different solution. It is very rare that people have to write programs that go through all fields of all records in a table (of course there are exceptions: lookup tables with a few records are one, 'output tables' for export to other programs or databases are another).
Set