| Subject | Re: [IBO] Long batch + [IBO] Complex Update |
| --- | --- |
| Author | Helen Borrie |
| Post date | 2001-12-05T07:29:33Z |
Renato,
As you have demonstrated some *really crucial* client/server issues in your two postings, I hope you won't mind if I address both of them in one response...
At 04:13 AM 05-12-01 -0200, Renato Martins wrote:
>I've got a query which updates a table, record by record. It takes a long time to run (with a local server, it takes about 14 minutes; I've got about 1000 records, which get their values through lots of queries...).

OK, now, here is your first problem. You need to be looking at your updates as SERVER operations, not as client operations. In other words, if the data involved in these updates live on the server and are not changed by the user, don't bring them across to the client except to provide selections for the user.
>I've noticed a lot of disk activity in the process. It seems that every update I make (by setting a TField's value) is written to disk.

We'll come to that - it sounds as though you do not have your transaction under control. Batched work on the server should be done within the context of a single transaction, and committed as a whole, not row-by-row.
>Two questions:
> 1. Is there a way to apply the whole update only at the end of the job, and do the work "silently" (I couldn't create an SP to do the job, the values are obtained from really complex conditions...)? I mean, as if I was using TClientDataSet, but without the record fetching...

FORGET about TClientDataSet - it is entirely inappropriate for client/server work, since (a) it requires that all data involved in processing be pulled across to the client and (b) it detaches the data from the isolation and integrity of being selected, updated and committed within a single transaction context. IBO can serve you better than that!
Think, rather, of the set of data structures that your process needs to have in the server's memory cache. You can visualise this, much as you do your clientdatasets - except that it's not your client program doing the work, but the server. Your mission, should you decide to accept it, is to reproduce on the server the work that your client is currently doing.
>BTW, I am already using TIB_Cursor and TIB_Query, with a TIB_DSQL here and there.

That doesn't help much. For complex processing on batches of rows, TIB_DSQL is your greatest friend - it can execute anything. From your (rather sparse) description, I doubt that you should consider any use for TIB_Cursor and TIB_Query in the context of your batched DML, except to display candidate rows and/or parameter pick-lists, as a way for your user to pass her requirements as parameters to the statement in your ib_dsql.
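To make that concrete, here's a rough sketch of the sort of thing an ib_dsql can do all by itself. The table and column names are invented, so treat it as a shape, not a recipe, and assume the component is already wired up to your connection and transaction (with IB_Components in the uses clause):

```
{ Sketch only: TABLE1, STATUS and WORK_DATE are invented names; the
  TIB_DSQL is assumed to be attached to an IB_Connection and an
  IB_Transaction elsewhere (e.g. on a datamodule). }
procedure ApplyMonthlyUpdate(DSQL: TIB_DSQL;
  const AMonthStart, AMonthEnd: TDateTime);
begin
  DSQL.SQL.Text :=
    'UPDATE TABLE1 SET STATUS = ''PROCESSED'' ' +
    'WHERE WORK_DATE BETWEEN :D1 AND :D2';
  DSQL.Prepare;
  DSQL.ParamByName('D1').AsDateTime := AMonthStart;
  DSQL.ParamByName('D2').AsDateTime := AMonthEnd;
  DSQL.Execute;  // one statement - the server touches every qualifying row
end;
```

One statement, one round trip - nothing comes across to the client at all.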
> 2. If I update a record (by putting it in edit mode through code, and changing its TField values), don't call Post, and then move to another record, is it going to be updated?

I'm assuming here that your program is somehow walking through a dataset in Edit mode, doing something to a column in a row and then calling Post? Three things to understand here are:
(1) In InterBase, Post sends changes to the server but does not commit them, unless the transaction is set to Autocommit AND you have not explicitly called StartTransaction in your code.
(2) Autocommit is NOT A GOOD THING for batched work, because it forces an immediate disk write for every modified row (assuming you are using synchronous writes, which you should be). It is only good for a task where a user is editing or creating individual records interactively and randomly. This doesn't sound like your scenario. (There is a small sketch after this list.)
(3) "Walking a dataset" on the client is entirely the wrong way to perform batched DML in a client/server database.
>I've got a situation like this. I have to populate a table, with all the days of a month, for each of the workers (let's call it Table1). All right, I've done this with a procedure that goes reasonably fast (generates +1000 records in about 5 seconds).

OK...
>Then I must generate a temporary table with the accesses the workers had in that month. Ok, insert ... select solved here (call it table2).

er...OK...except that you should be using a permanent table that gets emptied and refilled when you want temporary tabular storage...this is not Paradox, where creating and deleting objects is just a matter of creating and deleting files in a directory somewhere...
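For what it's worth, the permanent work table amounts to just two statements - all the names below are invented:

```
{ Sketch of the "permanent work table" pattern: WORK_ACCESS and ACCESS_LOG
  are invented names. The table is emptied and refilled inside one
  transaction instead of being created and dropped. }
procedure RefillWorkTable(Trn: TIB_Transaction; DSQL: TIB_DSQL;
  const AMonthStart, AMonthEnd: TDateTime);
begin
  Trn.StartTransaction;
  try
    DSQL.SQL.Text := 'DELETE FROM WORK_ACCESS';
    DSQL.Prepare;
    DSQL.Execute;

    DSQL.SQL.Text :=
      'INSERT INTO WORK_ACCESS (WORKER_ID, ACCESS_DATE) ' +
      'SELECT WORKER_ID, ACCESS_DATE FROM ACCESS_LOG ' +
      'WHERE ACCESS_DATE BETWEEN :D1 AND :D2';
    DSQL.Prepare;
    DSQL.ParamByName('D1').AsDateTime := AMonthStart;
    DSQL.ParamByName('D2').AsDateTime := AMonthEnd;
    DSQL.Execute;

    Trn.Commit;                    // empty + refill succeed or fail together
  except
    Trn.Rollback;
    raise;
  end;
end;
```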
>Now comes the hard part.
>For each record in table1, I must look up data in table2, based on *dozens* of conditions, to update it (the record in table1).
>I couldn't imagine an update / select which could do the work.

This is what SQL is designed for - but you have not provided any detail about what is so complex as to defeat SQL and stored procedures! If users are not involved in this processing, except to select parameters, then for sure there will be a statement (most likely, a call to a stored procedure) which will do the trick.
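Just to illustrate the kind of update/select that *can* do a per-row lookup - a correlated subquery inside the UPDATE. The tables, columns and join conditions here are hypothetical stand-ins for your dozens of real ones:

```
{ One shape an "update ... select" can take: the per-row lookup happens
  entirely on the server. All names are hypothetical. }
procedure UpdateFromWorkTable(DSQL: TIB_DSQL);
begin
  DSQL.SQL.Text :=
    'UPDATE TABLE1 T1 ' +
    'SET ACCESS_COUNT = ' +
    '  (SELECT COUNT(*) FROM TABLE2 T2 ' +
    '   WHERE T2.WORKER_ID = T1.WORKER_ID ' +
    '     AND T2.ACCESS_DATE = T1.WORK_DATE) ' +
    'WHERE EXISTS ' +
    '  (SELECT 1 FROM TABLE2 T2 ' +
    '   WHERE T2.WORKER_ID = T1.WORKER_ID ' +
    '     AND T2.ACCESS_DATE = T1.WORK_DATE)';
  DSQL.Prepare;
  DSQL.Execute;  // no rows cross the wire; the server does all the looking up
end;
```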
>So, I went with the infamous while not ...eof.

We are talking about a client-side operation on a dataset here, not a looping stored procedure...right?
>The problem is I have to do two loops in the table, and the table has more than 1000 records.
>Running on the local server, it took 10+ minutes.
>If I can't do it with an update, is there any other way (I mean, not using while ...eof)?

This is the worst way to go. You need a server-side procedure with input parameters. Possibly, a master stored procedure that calls other procedures... You need a UI on the client side for your user to select and pass parameters to the procedure... and possibly you need a UI through which your server-side process can feed back the results of processing to the user. That is all your client program should be attempting to do.
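With that design, the client's entire job shrinks to something like this - PROCESS_MONTH and its parameters are invented names for whatever your master procedure turns out to be:

```
{ The client's whole job, once the looping lives in a stored procedure:
  pass the user's selections as parameters and execute. PROCESS_MONTH and
  its parameters are invented. }
procedure RunServerSideProcess(Trn: TIB_Transaction; DSQL: TIB_DSQL;
  AWorkerID: Integer; const AMonthStart, AMonthEnd: TDateTime);
begin
  Trn.StartTransaction;
  try
    DSQL.SQL.Text :=
      'EXECUTE PROCEDURE PROCESS_MONTH (:WORKER_ID, :D1, :D2)';
    DSQL.Prepare;
    DSQL.ParamByName('WORKER_ID').AsInteger := AWorkerID;
    DSQL.ParamByName('D1').AsDateTime := AMonthStart;
    DSQL.ParamByName('D2').AsDateTime := AMonthEnd;
    DSQL.Execute;                  // all the looping happens inside the server
    Trn.Commit;
  except
    Trn.Rollback;
    raise;
  end;
end;
```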
>BTW, I was doing it with IBX's table component, and migrated to IBO's table component, and it didn't get any better...

If you are intending to do any serious work with InterBase, you *will* need to get your head around queries, lose that table component and lose that spreadsheet mentality. Whilst TIBOTable is loaded with functionality that is absent (and simply not achievable) in TIBTable, a table component is a very inflexible choice for client/server work, for anything except a job that actually needs to display all of the columns and all of the rows in a single table without picking up related data from other tables. You should not be trying to do batched work in table components (or any dataset components, for that matter).
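And for the display side, a parameterized SELECT fetches only the slice the user actually needs to see - again, the names are only illustrative:

```
{ Display side only: Qry is a TIB_Query wired to a connection, a
  transaction and (say) a grid; all names are illustrative. }
procedure ShowWorkerMonth(Qry: TIB_Query; AWorkerID: Integer;
  const AMonthStart, AMonthEnd: TDateTime);
begin
  Qry.Close;
  Qry.SQL.Text :=
    'SELECT WORKER_ID, WORK_DATE, HOURS_WORKED FROM TABLE1 ' +
    'WHERE WORKER_ID = :WORKER_ID ' +
    'AND WORK_DATE BETWEEN :D1 AND :D2';
  Qry.ParamByName('WORKER_ID').AsInteger := AWorkerID;
  Qry.ParamByName('D1').AsDateTime := AMonthStart;
  Qry.ParamByName('D2').AsDateTime := AMonthEnd;
  Qry.Open;   // only the selected rows cross the wire, and only for display
end;
```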
I'm suspicious of a process that requires temporary tables to do work. The procedural capabilities of InterBase are such that temp tables are rarely (if ever) necessary or desirable. I can think of exceptional cases where a temp structure might be useful, e.g. where you have asynchronous sub-processes occurring, involving multiple users (and thus, multiple transactions) but, if these cases do apply, then you should seriously reconsider a design that requires user creation of metadata objects to be part of a data manipulation process.
Those requirements are forced on you by certain RDBMSs - SQL Server and Access, for example - but, by carrying them over to InterBase, you are denying yourself the benefits of an RDBMS that was built to take care of everything right inside its own boundaries!
The key things to examine in InterBase are the query and the parameter. IBO fully exploits the efficiency and integrity of parameterized SQL and a database management system that never allows users to physically touch data at all.
Do provide a bit more detail about what your process has to do. It is most likely that several people will come up with ways to simplify it and speed it up.
regards,
Helen
All for Open and Open for All
InterBase Developer Initiative · http://www.interbase2000.org
_______________________________________________________