Subject: Re: [firebird-support] Cache retention?
Author: Ann W. Harrison
Post date: 2006-04-28T15:14:40Z
Mathias Dellaert wrote:
> However I don't think I got the memory use settings quite right. It
> seems to me that firebird always releases memory as soon as the
> transactions/connections close,

Err, yes, sort of. There are significant differences between
transactions and connections - in Firebird, a single connection
can have multiple active transactions, and in any database, a
connection can have a series of transactions. The problem here
is connections.
The cache pages stay in memory as long as there is an active
connection, but when the last connection closes, the server
closes the database completely, releasing all pages from the
cache. For most applications, that's the right thing - though
a 5 second wait would really help in some cases, like our test
system, where a series of connections start, run, and close, one
after the other, flushing the page and metadata caches....
Sorry for that digression... In your case, filling memory with
40K pages, then dumping it is a terrible waste for a machine that's
dedicated to one unchanging database. So, I'm going to suggest
something that is generally considered a really bad idea: start
a dummy connection and just leave it open. You don't even need
to start a transaction on that connection - though most tools
do you the "favor" of starting a transaction for you. Leave that
connection open forever. It will keep the database open and
protect your cache.
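One way to do that - just a sketch, with a placeholder server
name, path, and credentials - is to park an isql session on the
database and walk away:

    isql server:/data/warehouse.fdb -user SYSDBA -password masterkey

Don't type EXIT or QUIT; as long as that session lives, the
server keeps the database, and its cache, open. Any other client
that simply holds a connection works just as well.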
Normally long lived connections and especially long lived
transactions cause problems by blocking garbage collection.
But, if you don't have any updates, you don't have any garbage
to collect.
> which means that for every query it has
> to read everything from disk again.

As above, only if every query runs in its own connection which
ends with the end of the query. Keeping an open connection will
fix that problem.
> At the same time I/O reads/writes are very high: about an
> hour of moderate testing (one user) gave I/O reads of 14 Gb and writes
> of 4.6 Gb (these values are from task manager, yes I am aware that is
> not too trustworthy, but it gives an indication).

If your database is read-only, I wonder where the writes are
coming from, unless the task manager is counting copying to
memory as a write.
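It's also worth checking that the database really is flagged
read-only at the database level. gfix sets that mode (file name
and credentials below are placeholders):

    gfix -mode read_only /data/warehouse.fdb -user SYSDBA -password masterkey

A database in read_only mode refuses writes entirely, which
would make those task manager numbers even harder to explain.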
To avoid the possibility that there's a problem with old data
(aka garbage), you might back up and restore your database after
you load it. However, if your loads are just inserts - no
updates or deletes - then the backup/restore doesn't matter.
You should not be writing to the database at all if the
database is read-only, and a read/write database should write
only two pages per transaction.
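A sketch of that backup/restore cycle, with placeholder file
names and credentials:

    gbak -b -user SYSDBA -password masterkey /data/warehouse.fdb /data/warehouse.fbk
    gbak -c -user SYSDBA -password masterkey /data/warehouse.fbk /data/warehouse_new.fdb

The restore rebuilds every table and index from the backup, so
old record versions and other garbage don't come along.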
> An additional problem I'm having is filling the warehouse and performing
> database-wide updates (obviously I turn read-only off for this). For
> instance it's almost impossible to do a simple UPDATE Sales SET outQTY =
> outQTY*-1; (updates 500k records, outQTY is not indexed) (I gave up
> after it hadn't returned after 20 minutes).

Hmmm... that's odd. Could you run that query in ISQL after doing
a SET STATS? Wait for it to finish and send the stats results.
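In isql, that would look something like this (the UPDATE is the
one from your message):

    SQL> SET STATS ON;
    SQL> UPDATE Sales SET outQTY = outQTY * -1;
    SQL> COMMIT;

With stats on, isql prints per-statement figures - elapsed time,
buffer count, page reads and writes - which is the breakdown
I'd like to see.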
Thanks,
Ann