| Subject | RE: [IBO] Archiving data |
|---|---|
| Author | Riho-Rene Ellermaa |
| Post date | 2004-09-02T06:15:09Z |
The domains are all VARCHARs with sizes 20...35.
The same transaction is used by both connections (I thought that would
handle the two-phase commit):
QFrom->IB_Connection=DModul->IBConn; //working database
QTo->IB_Connection=DModul->ArchConn; // archive database
QFrom->IB_Transaction=DModul->Trans;
QTo->IB_Transaction=DModul->Trans;
Riho Ellermaa
_____
From: Lucas Franzen [mailto:luc@...]
Sent: Wednesday, September 01, 2004 5:01 PM
To: IBObjects@yahoogroups.com
Subject: Re: [IBO] Archiving data
Riho-Rene Ellermaa schrieb:
> Hi!
>
> This approach works OK for "normal" users, but now I encountered one who
> has LOTS of data in one table (structure is at the end of the mail).

which doesn't show too much, since the used domains are of
unknown type ;-)

> Each Post() added approx. 200 KB to the memory usage and my computer ran
> out of memory very fast.

The problem is that you don't do commits in between.
When doing large data transfers (inserts, updates, deletes) it's
recommended to commit every <n> records (there's no fixed number for
this; it depends on the size).
So if you commit every 100, 1000, or 10000 inserts, everything will
run better.
And I can only see transaction control for one of the two databases
(I don't know which one the transaction is bound to).
You should definitely use a two-phase-commit technique when handling
two databases, so each database stays "sane".
Luc.