Subject | Re: [firebird-support] Cache retention? |
---|---|
Author | Mathias Dellaert |
Post date | 2006-04-29T07:12:15Z |
Ann W. Harrison wrote:
> Mathias Dellaert wrote:
>
>> However I don't think I got the memory use settings quite right. It
>> seems to me that firebird always releases memory as soon as the
>> transactions/connections close,
>
> Err, yes, sort of. There are significant differences between
> transactions and connections - in Firebird, a single connection
> can have multiple active transactions, and in any database, a
> connection can have a series of transactions.

Yes, I'm aware of the differences. I just wasn't sure which triggered
the release because in this case I close the transaction and the
connection at pretty much the same time (of course it would have been
easy to test so chalk this up to laziness on my part). Thanks for the
clarification on when the memory is released. I'll try your idea of
keeping a dummy connection open.
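
(For the record, the "dummy connection" I have in mind is nothing fancier
than an idle isql session left attached to the database, roughly as below -
the file name and credentials are just placeholders, and I still need to
confirm that an idle session is enough to keep the cache allocated.)

  isql -user SYSDBA -password xxxx warehouse.fdb
  (then simply leave that session connected and otherwise idle)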

> To avoid the possibility that there's a problem with old data
> (aka garbage), you might backup and restore your database after
> you load it. However, if your loads are just inserts - no
> updates or deletes - then the backup/restore doesn't matter.
> You should not be writing to the database at all if the
> database is read-only, and should write only two pages per
> transaction for read/write databases.
>

I currently do that after an update of the warehouse (in this stage an
update is still triggered manually) and I'm considering adding the
backup/restore to the daily update cycle. The way the warehouse is
currently built there are more updates than inserts.
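
Concretely, I was thinking of appending something like this to the nightly
batch (file names are placeholders and the switches are from memory, so
corrections welcome):

  gbak -b -user SYSDBA -password xxxx warehouse.fdb warehouse.fbk
  gbak -c -user SYSDBA -password xxxx warehouse.fbk warehouse_new.fdb

i.e. restoring to a fresh file and swapping it in afterwards rather than
restoring over the live database.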

> If your database is read-only, I wonder where the writes are
> coming from, unless the task manager is counting copying to
> memory as a write.

Hmmm ... you're making me wonder now ... I'm pretty sure I restarted the
service after building the warehouse and doing a backup/restore but
maybe I performed a last correction cycle afterwards. I'll do some more
reliable tests with performance monitor when I get a chance.
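
In case it matters, the read-only toggling I mentioned is (as far as I
recall) just plain gfix calls along these lines, so the batch log should
show whether a correction cycle ran after the restore (database name is a
placeholder):

  gfix -mode read_write -user SYSDBA -password xxxx warehouse.fdb
  ... run correction cycle ...
  gfix -mode read_only -user SYSDBA -password xxxx warehouse.fdb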

>> An additional problem I'm having is filling the warehouse and performing
>> database-wide updates (obviously I turn read-only off for this). For
>> instance it's almost impossible to do a simple UPDATE Sales SET outQTY =
>> outQTY*-1; (updates 500k records, outQTY is not indexed) (I gave up
>> after it hadn't returned after 20 minutes).
>>
>
> Hmmm... that's odd. Could you run that query in ISQL after doing
> a SET STATS? Wait for it to finish and send the stats results.
>

I will do that as soon as I'm either back at my workspace or regain
remote access to the development machine. I forgot to note that the
bottleneck in these operations seems to be the disk, as the processor is
barely showing any activity at all.
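
Just to make sure I run it the way you intend, I assume you mean a session
along these lines (database path and credentials are placeholders):

  isql -user SYSDBA -password xxxx warehouse.fdb
  SQL> SET STATS ON;
  SQL> UPDATE Sales SET outQTY = outQTY * -1;
  SQL> COMMIT;
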
Thank you for your swift and competent help.
Regards,
MD